We generalize the convolutional NMF by taking the $\beta$-divergence as the loss function, add an elastic-net regularizer to promote sparsity, and derive multiplicative update rules for its factors in closed form. The new update rules embed the $\beta$-NMF, the standard convolutional NMF, and sparse coding (also known as basis pursuit) as special cases. We demonstrate that the originally published update rules for the convolutional NMF are suboptimal and that their convergence rate depends on the kernel size.
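To make the embedded special case concrete, the following is a minimal sketch of plain $\beta$-NMF with multiplicative updates and an elastic-net penalty on the activations, which the abstract says the generalized rules contain as a special case. The function name `beta_nmf` and the parameter names `l1`/`l2` are our own illustrative choices, not identifiers from the paper, and the update rules shown are the standard textbook $\beta$-divergence updates, not the paper's convolutional generalization.

```python
import numpy as np

def beta_nmf(V, rank, beta=1.0, n_iter=200, l1=0.0, l2=0.0, seed=0):
    """Multiplicative-update beta-NMF with an elastic-net penalty on H.

    Minimizes the beta-divergence D_beta(V | W @ H) plus
    l1 * sum(H) + (l2 / 2) * sum(H**2), keeping W, H nonnegative.
    beta=2 is Euclidean distance, beta=1 is KL, beta=0 is Itakura-Saito.
    """
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + 1e-3
    H = rng.random((rank, T)) + 1e-3
    eps = 1e-12  # avoids division by zero in the ratios below
    for _ in range(n_iter):
        WH = W @ H + eps
        # H update: gradient-descent step folded into a multiplicative
        # ratio; the elastic-net terms enter the denominator.
        H *= (W.T @ (V * WH ** (beta - 2))) / (
            W.T @ WH ** (beta - 1) + l1 + l2 * H + eps
        )
        WH = W @ H + eps
        # W update: same ratio form, without the sparsity penalty.
        W *= ((V * WH ** (beta - 2)) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
    return W, H
```

Because each update multiplies the current factor by a nonnegative ratio, nonnegativity is preserved automatically; with `l1 > 0` the denominator of the `H` update grows, shrinking small activations toward zero and producing the sparsity the elastic net is meant to encourage.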