We address the problem of cross-speaker style transfer for text-to-speech (TTS) using data augmentation via voice conversion. We assume access to a corpus of neutral, non-expressive data from a target speaker and supporting conversational expressive data from different speakers. Our goal is to build a TTS system that is expressive while retaining the target speaker's identity. The proposed approach relies on voice conversion to first generate high-quality data from the set of supporting expressive speakers. The voice-converted data is then pooled with natural data from the target speaker and used to train a single-speaker multi-style TTS system. We provide evidence that this approach is efficient, flexible, and scalable. The method is evaluated using one or more supporting speakers, as well as varying amounts of supporting data. We further provide evidence that this approach affords some controllability of speaking style when multiple supporting speakers are used. We conclude by scaling the proposed technology to a set of 14 speakers across 7 languages. Results indicate that our technology consistently improves synthetic samples in terms of style similarity while retaining the target speaker's identity.
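To make the pipeline concrete, the following minimal Python sketch illustrates the two augmentation steps the abstract describes: voice-converting the supporting speakers' expressive data into the target speaker's identity, then pooling it with the target speaker's natural neutral data for multi-style TTS training. All names here (Utterance, convert_voice, build_training_set) are hypothetical placeholders for illustration, not the authors' implementation.

```python
# Illustrative sketch of the data-augmentation pipeline; all identifiers
# are hypothetical placeholders, not the paper's actual code.
from dataclasses import dataclass

@dataclass
class Utterance:
    audio_path: str
    text: str
    style: str    # e.g. "neutral" or "conversational"
    speaker: str

def convert_voice(utt: Utterance, target_speaker: str) -> Utterance:
    """Placeholder for a voice-conversion model that re-renders the
    utterance in the target speaker's voice while keeping its style."""
    converted_path = utt.audio_path.replace(".wav", f"_{target_speaker}.wav")
    return Utterance(converted_path, utt.text, utt.style, target_speaker)

def build_training_set(target_neutral: list[Utterance],
                       supporting_expressive: list[Utterance],
                       target_speaker: str) -> list[Utterance]:
    # Step 1: voice-convert supporting speakers' expressive recordings
    # so they carry the target speaker's identity.
    converted = [convert_voice(u, target_speaker) for u in supporting_expressive]
    # Step 2: pool converted expressive data with the target speaker's
    # natural neutral recordings; the style tag conditions the TTS model.
    return target_neutral + converted
```

Training a single-speaker multi-style TTS system on the pooled set then amounts to conditioning the model on the `style` tag, so that neutral and expressive renditions share one speaker identity.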