In this work, we propose a zero-shot voice conversion method using speech representations trained with self-supervised learning. First, we develop a multi-task model to decompose a speech utterance into features such as linguistic content, speaker characteristics, and speaking style. To disentangle content and speaker representations, we propose a training strategy based on Siamese networks that encourages similarity between the content representations of the original and pitch-shifted audio. Next, we develop a synthesis model with pitch and duration predictors that can effectively reconstruct the speech signal from its decomposed representation. Our framework allows controllable and speaker-adaptive synthesis to perform zero-shot any-to-any voice conversion, achieving state-of-the-art results on metrics evaluating speaker similarity, intelligibility, and naturalness. Using just 10 seconds of data for a target speaker, our framework can perform voice swapping and achieves a speaker verification EER of 5.5% for seen speakers and 8.4% for unseen speakers.
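To make the Siamese disentanglement idea concrete, the sketch below shows one plausible way to implement a content-consistency loss between an utterance and its pitch-shifted copy. This is a minimal illustration, not the authors' implementation: the encoder architecture, the `ContentEncoder` and `siamese_content_loss` names, and the choice of cosine distance and pitch-shift amount are all assumptions made for the example.

```python
# Hedged sketch of a Siamese content-consistency loss: the content encoder is
# encouraged to produce similar frame-level embeddings for an utterance and a
# pitch-shifted version of it, pushing pitch/speaker cues out of the content
# branch. ContentEncoder and siamese_content_loss are hypothetical names.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio


class ContentEncoder(nn.Module):
    """Toy stand-in for a content encoder (the real model would be an SSL network)."""

    def __init__(self, sample_rate=16000, n_mels=80, dim=256):
        super().__init__()
        self.melspec = torchaudio.transforms.MelSpectrogram(
            sample_rate=sample_rate, n_mels=n_mels)
        self.rnn = nn.GRU(n_mels, dim, batch_first=True)

    def forward(self, wav):                        # wav: (batch, samples)
        mel = self.melspec(wav).transpose(1, 2)    # (batch, frames, n_mels)
        out, _ = self.rnn(mel)
        return out                                 # (batch, frames, dim)


def siamese_content_loss(encoder, wav, sample_rate=16000, n_steps=4):
    """Penalize dissimilarity between content embeddings of original and pitch-shifted audio."""
    pitch_shift = torchaudio.transforms.PitchShift(sample_rate, n_steps=n_steps)
    z_orig = encoder(wav)
    z_shift = encoder(pitch_shift(wav))
    # 1 - cosine similarity averaged over frames; other distances would also work.
    return (1.0 - F.cosine_similarity(z_orig, z_shift, dim=-1)).mean()


if __name__ == "__main__":
    enc = ContentEncoder()
    dummy_wav = torch.randn(2, 16000)              # two 1-second dummy waveforms
    print(siamese_content_loss(enc, dummy_wav).item())
```

In practice this auxiliary loss would be combined with the multi-task decomposition objectives and the reconstruction loss of the synthesis model; the sketch only isolates the pitch-shift Siamese term described in the abstract.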