Supervised neural network training has led to significant progress on single-channel sound separation. This approach relies on ground truth isolated sources, which precludes scaling to widely available mixture data and limits progress on open-domain tasks. The recent mixture invariant training (MixIT) method enables training on in-the-wild data; however, it suffers from two outstanding problems. First, it produces models that tend to over-separate, yielding more output sources than are present in the input. Second, the exponential computational complexity of the MixIT loss limits the number of feasible output sources. In this paper we address both issues. To combat over-separation we introduce new losses: sparsity losses that favor fewer output sources and a covariance loss that discourages correlated outputs. We also experiment with a semantic classification loss by predicting weak class labels for each mixture. To handle larger numbers of sources, we introduce an efficient approximation using a fast least-squares solution, projected onto the MixIT constraint set. Our experiments show that the proposed losses curtail over-separation and improve overall performance. The best performance is achieved using larger numbers of output sources, enabled by our efficient MixIT loss, combined with sparsity losses to prevent over-separation. On the FUSS test set, we achieve over 13 dB in multi-source SI-SNR improvement, while boosting single-source reconstruction SI-SNR by over 17 dB.
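To make the two MixIT variants concrete, the following is a minimal NumPy sketch (not the authors' implementation) contrasting the exhaustive MixIT assignment, which searches all binary mixing matrices and is exponential in the number of outputs M, with a fast approximation that solves an unconstrained least-squares problem and projects the result onto the MixIT constraint set (each output assigned to exactly one input mixture). Plain MSE stands in for the negative-SNR losses used in the paper, and the projection rule shown here is an illustrative assumption.

```python
import itertools
import numpy as np

def mse(refs, ests):
    # Per-mixture mean squared error, a stand-in for the paper's SNR-based losses.
    return np.mean((refs - ests) ** 2, axis=-1)

def mixit_exhaustive(refs, ests):
    """refs: (2, T) input mixtures; ests: (M, T) separated outputs.
    Searches all 2**M binary mixing matrices whose columns are one-hot."""
    M = ests.shape[0]
    best_loss, best_A = np.inf, None
    for bits in itertools.product([0, 1], repeat=M):
        bits = np.array(bits)
        A = np.stack([bits, 1 - bits])          # (2, M), each column one-hot
        loss = mse(refs, A @ ests).sum()
        if loss < best_loss:
            best_loss, best_A = loss, A
    return best_loss, best_A

def mixit_least_squares(refs, ests):
    """Fast approximation: unconstrained least-squares fit of the mixing matrix,
    then projection of each column onto the one-hot MixIT constraint set."""
    A_ls, *_ = np.linalg.lstsq(ests.T, refs.T, rcond=None)   # (M, 2) real-valued
    A_ls = A_ls.T                                            # (2, M)
    A = np.zeros_like(A_ls)
    A[np.argmax(A_ls, axis=0), np.arange(A_ls.shape[1])] = 1.0
    return mse(refs, A @ ests).sum(), A

# Toy usage: 4 separated outputs remixed into 2 reference mixtures.
rng = np.random.default_rng(0)
ests = rng.standard_normal((4, 16000))
refs = np.stack([ests[0] + ests[1], ests[2] + ests[3]])
print(mixit_exhaustive(refs, ests)[1])
print(mixit_least_squares(refs, ests)[1])
```

The exhaustive search evaluates 2**M candidate matrices, which is why the number of output sources is limited in standard MixIT; the least-squares variant costs a single linear solve followed by a projection, making larger M practical.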