This paper presents a novel framework for building a voice conversion (VC) system by learning from a text-to-speech (TTS) synthesis system, which we call TTS-VC transfer learning. We first develop a multi-speaker speech synthesis system with a sequence-to-sequence encoder-decoder architecture, where the encoder extracts robust linguistic representations from text, and the decoder, conditioned on a target speaker embedding, takes the context vectors and the attention recurrent network cell output to generate target acoustic features. We take advantage of the fact that the TTS system maps input text to speaker-independent context vectors, and reuse this mapping to supervise the training of the latent representations of an encoder-decoder voice conversion system. In the voice conversion system, the encoder takes speech rather than text as input, while the decoder is functionally similar to the TTS decoder. Because we condition the decoder on a speaker embedding, the system can be trained on non-parallel data for any-to-any voice conversion. During voice conversion training, we present text to the speech synthesis network and speech to the voice conversion network. At run-time, the voice conversion network uses its own encoder-decoder architecture. Experiments show that the proposed approach consistently outperforms two competitive voice conversion baselines, namely the phonetic posteriorgram and variational autoencoder methods, in terms of speech quality, naturalness, and speaker similarity.
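The decoder step described above can be sketched minimally: the attention context vector, the attention-RNN cell output, and the target speaker embedding are combined and projected to an acoustic frame. The dimensions, the concatenation scheme, and the single linear projection below are illustrative assumptions, not the paper's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- illustrative only, not taken from the paper.
CONTEXT_DIM, ATTN_DIM, SPK_DIM, MEL_DIM = 8, 6, 4, 10

# Random projection standing in for a trained decoder output layer.
W = rng.standard_normal((MEL_DIM, CONTEXT_DIM + ATTN_DIM + SPK_DIM))

def decoder_step(context, attn_rnn_out, speaker_embedding):
    """One decoder step: concatenate the attention context vector, the
    attention-RNN cell output, and the target speaker embedding, then
    project to one acoustic (e.g. mel-spectrogram) frame."""
    x = np.concatenate([context, attn_rnn_out, speaker_embedding])
    return W @ x

# Same linguistic content, two different target speaker embeddings.
context = rng.standard_normal(CONTEXT_DIM)
attn_out = rng.standard_normal(ATTN_DIM)
spk_a = rng.standard_normal(SPK_DIM)
spk_b = rng.standard_normal(SPK_DIM)

frame_a = decoder_step(context, attn_out, spk_a)
frame_b = decoder_step(context, attn_out, spk_b)
```

Because the speaker identity enters only through the embedding, swapping the embedding changes the generated frame while the linguistic input stays fixed, which is what enables any-to-any conversion from non-parallel data.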