Deep learning based models have significantly improved the performance of speech separation on input mixtures such as those in the cocktail-party problem. Prominent methods (e.g., frequency-domain and time-domain speech separation) usually build regression models that predict the ground-truth speech from the mixture, using a masking-based design and a signal-level loss criterion (e.g., MSE or SI-SNR). This study demonstrates, for the first time, that a synthesis-based approach can also perform well on this problem, with great flexibility and strong potential. Specifically, we propose a novel speech separation/enhancement model based on the recognition of discrete symbols, converting the paradigm of speech separation/enhancement tasks from regression to classification. After the discrete symbol sequence is predicted, each target speech signal can be re-synthesized by feeding the symbols to a synthesis model. Evaluation results on the WSJ0-2mix and VCTK-noisy corpora in various settings show that the proposed method steadily synthesizes the separated speech with high quality and without any interference, which is difficult to avoid in regression-based methods. In addition, with negligible loss of listening quality, speaker conversion of the enhanced/separated speech can easily be realized through our method.
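The regression-to-classification idea above can be illustrated with a minimal toy sketch: speech frames are represented by indices into a codebook of discrete symbols, "separation" reduces to per-frame classification over symbol ids, and the target signal is re-synthesized by a codebook lookup. All names, shapes, and the codebook itself are illustrative assumptions, not the paper's actual model or vocoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook: 16 discrete symbols, each an 8-dim "frame" embedding.
CODEBOOK = rng.standard_normal((16, 8))

def classify_frames(logits):
    """Classification step: pick one discrete symbol id per frame
    instead of regressing a continuous signal."""
    return logits.argmax(axis=-1)  # shape (T,)

def synthesize(symbol_ids):
    """Toy synthesis step: re-synthesize frames by codebook lookup.
    A real system would use a neural synthesizer/vocoder here."""
    return CODEBOOK[symbol_ids]  # shape (T, 8)

# 5 frames of classifier outputs over the 16-symbol vocabulary.
logits = rng.standard_normal((5, 16))
ids = classify_frames(logits)
speech = synthesize(ids)
print(ids.shape, speech.shape)
```

Because the output is built entirely from codebook entries, the re-synthesized signal cannot contain residual interference from the mixture, which mirrors the interference-free property claimed in the abstract.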