We introduce a new automatic evaluation method for speaker similarity assessment that is consistent with human perceptual scores. Modern neural text-to-speech models require vast amounts of clean training data, which is why many solutions have moved from single-speaker models to models trained on examples from many different speakers. Multi-speaker models bring new possibilities, such as faster creation of new voices, but also a new problem: speaker leakage, where the speaker identity of a synthesized example may not match that of the target speaker. Currently, the only way to discover this issue is through costly perceptual evaluations. In this work, we propose an automatic method for assessing speaker similarity. For that purpose, we extend recent work on speaker verification systems and evaluate how well different metrics and speaker embedding models reflect Multiple Stimuli with Hidden Reference and Anchor (MUSHRA) scores. Our experiments show that we can train a model to predict speaker-similarity MUSHRA scores from speaker embeddings with 0.96 accuracy and a significant correlation, up to a 0.78 Pearson score, at the utterance level.
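As a minimal illustration of the kind of evaluation described above, the sketch below scores hypothetical synthesized utterances against a reference-speaker embedding with cosine similarity (a common speaker-verification metric) and then measures how those scores track utterance-level MUSHRA ratings via Pearson correlation. All embeddings and MUSHRA values here are made up for illustration; the paper's actual embedding models and learned predictor are not reproduced.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    std_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (std_x * std_y)

# Hypothetical data: one reference-speaker embedding, three synthesized
# utterances, and invented utterance-level MUSHRA similarity scores.
reference = [0.1, 0.9, 0.3]
synthesized = [
    [0.2, 0.8, 0.4],    # close to the reference speaker
    [0.15, 0.85, 0.3],  # very close
    [0.9, 0.1, 0.2],    # speaker leakage: a different identity
]
mushra_scores = [78.0, 85.0, 20.0]

similarities = [cosine_similarity(reference, e) for e in synthesized]
correlation = pearson(similarities, mushra_scores)
print("cosine similarities:", similarities)
print("Pearson correlation with MUSHRA:", round(correlation, 2))
```

In this toy setup the leaked-speaker utterance gets both a low cosine similarity and a low MUSHRA score, so the embedding-based metric correlates strongly with the perceptual ratings, which is the relationship the paper quantifies at scale.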