Speech quality assessment has been a critical problem in speech processing for decades. Existing automatic evaluations usually require clean references or parallel ground-truth data, which is infeasible when the amount of data soars. Subjective tests, on the other hand, do not need any additional clean or parallel data and correlate better with human perception. However, such tests are expensive and time-consuming because crowd workers are required. It is therefore highly desirable to develop an automatic evaluation approach that correlates well with human perception without requiring ground-truth data. In this paper, we use self-supervised pre-trained models for MOS prediction. We show that their representations can distinguish between clean and noisy audio. We then fine-tune these pre-trained models, followed by simple linear layers, in an end-to-end manner. Experimental results show that our framework outperforms two previous state-of-the-art models by a significant margin on Voice Conversion Challenge 2018 and achieves comparable or superior performance on Voice Conversion Challenge 2016. We also conducted an ablation study to investigate how each module benefits the task. All experiments are implemented with publicly available toolkits and are reproducible.
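As a concrete illustration of the framework described above, the sketch below pairs a self-supervised pre-trained encoder with a single linear layer and fine-tunes both end-to-end to regress MOS. This is not the authors' exact implementation: the wav2vec 2.0 checkpoint, mean-pooling strategy, and hyperparameters are assumptions chosen for illustration.

```python
import torch
import torchaudio

# Minimal sketch: a self-supervised pre-trained encoder (torchaudio's
# wav2vec 2.0 Base checkpoint as a stand-in) followed by a simple linear
# layer, trained end-to-end to predict utterance-level MOS.

class MOSPredictor(torch.nn.Module):
    def __init__(self):
        super().__init__()
        bundle = torchaudio.pipelines.WAV2VEC2_BASE
        self.encoder = bundle.get_model()       # pre-trained SSL encoder
        self.head = torch.nn.Linear(768, 1)     # simple linear layer

    def forward(self, waveform):
        # extract_features returns per-layer frame-level representations;
        # take the last layer and mean-pool over time to get one vector
        # per utterance, then map it to a scalar MOS estimate.
        features, _ = self.encoder.extract_features(waveform)
        pooled = features[-1].mean(dim=1)
        return self.head(pooled).squeeze(-1)

model = MOSPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # fine-tunes encoder and head jointly
criterion = torch.nn.MSELoss()

# One illustrative training step on a dummy 16 kHz batch with MOS labels.
waveform = torch.randn(4, 16000)
target_mos = torch.tensor([3.2, 4.1, 2.5, 3.8])

optimizer.zero_grad()
loss = criterion(model(waveform), target_mos)
loss.backward()
optimizer.step()
```

Because the optimizer receives all of `model.parameters()`, gradients flow through both the linear head and the pre-trained encoder, which is what "fine-tune in an end-to-end manner" refers to; freezing the encoder and training only the head would instead correspond to using the SSL model as a fixed feature extractor.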