Cross-lingual voice conversion (VC) is an important and challenging problem due to significant mismatches between the phonetic sets and the speech prosody of different languages. In this paper, we build upon the neural text-to-speech (TTS) model FastSpeech and the LPCNet neural vocoder to design a new cross-lingual VC framework named FastSpeech-VC. We address the mismatches of phonetic set and speech prosody by applying Phonetic PosteriorGrams (PPGs), which have been proven to bridge speaker and language boundaries. Moreover, we add normalized logarithm-scale fundamental frequency (Log-F0) to further compensate for prosodic mismatches and significantly improve naturalness. Our experiments on English and Mandarin demonstrate that, using only mono-lingual corpora, the proposed FastSpeech-VC achieves high-quality converted speech with a mean opinion score (MOS) close to that of professional recordings, while maintaining good speaker similarity. Compared to baselines using Tacotron2 and Transformer TTS models, FastSpeech-VC offers a controllable converted speech rate and much faster inference. More importantly, FastSpeech-VC can easily be adapted to a speaker with limited training utterances.
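The normalized Log-F0 feature mentioned above is typically computed by z-scoring the log fundamental frequency over voiced frames on a per-speaker basis, which removes speaker-dependent pitch range before conversion. The paper does not specify its exact normalization; a minimal sketch of the common approach, assuming a frame-level F0 contour in Hz with zeros marking unvoiced frames:

```python
import numpy as np

def normalize_log_f0(f0, eps=1e-8):
    """Z-score log-F0 over voiced frames; unvoiced frames remain 0.

    f0  : 1-D array of per-frame F0 values in Hz (0 for unvoiced frames).
    Returns a normalized log-F0 contour with zero mean and unit variance
    over the voiced frames (an assumption, not the paper's exact recipe).
    """
    f0 = np.asarray(f0, dtype=np.float64)
    voiced = f0 > 0                       # unvoiced frames carry F0 = 0
    log_f0 = np.log(f0[voiced])           # log-scale pitch on voiced frames
    mean = log_f0.mean()
    std = log_f0.std() + eps              # eps guards against flat contours
    out = np.zeros_like(f0)
    out[voiced] = (log_f0 - mean) / std   # per-speaker z-score normalization
    return out
```

In practice the mean and standard deviation would be estimated over a speaker's whole corpus rather than a single utterance, so the same statistics can be reused at conversion time.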