The problem of speech separation, also known as the cocktail party problem, is the task of isolating a single speech signal from a mixture of speech signals. Previous work on source separation derived an upper bound for the separation task in the domain of human speech. This bound applies to deterministic models, and recent advances in generative models challenge it. We show how the upper bound can be generalized to the case of random generative models. Applying a diffusion-model vocoder, pretrained to model single-speaker voices, to the output of a deterministic separation model leads to state-of-the-art separation results. We show that this requires combining the output of the separation model with that of the diffusion model. In our method, a linear combination is performed in the frequency domain, using weights inferred by a learned model. We report state-of-the-art results for 2, 3, 5, 10, and 20 speakers on multiple benchmarks. In particular, for two speakers, our method surpasses what was previously considered the upper performance bound.
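The frequency-domain combination step can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the use of a single whole-signal FFT (rather than an STFT), and the per-bin weight vector passed in as an argument are all assumptions for illustration. In the paper, the weights are inferred by a learned model; here they are simply an input.

```python
import numpy as np

def combine_estimates(sep_out, diff_out, weights):
    """Linearly combine two time-domain estimates in the frequency domain.

    sep_out  -- output of the deterministic separation model (1-D array)
    diff_out -- output of the diffusion vocoder for the same segment
    weights  -- per-frequency-bin values in [0, 1]; in the paper these
                would come from a learned model (assumption: convex
                combination, a plausible reading of "linear combination")
    """
    S = np.fft.rfft(sep_out)
    D = np.fft.rfft(diff_out)
    # Convex per-bin combination of the two spectra.
    mixed = weights * S + (1.0 - weights) * D
    # Back to the time domain, preserving the original length.
    return np.fft.irfft(mixed, n=len(sep_out))
```

With weights of all ones the result reduces to the separation model's output, and with all zeros to the diffusion model's output; intermediate weights blend the two per frequency bin.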