Cross-speaker style transfer is crucial for multi-style and expressive speech synthesis at scale, since it does not require each target speaker to be expert in expressing every style, nor the collection of corresponding recordings for model training. However, the performance of existing style transfer methods still falls far short of real application needs. The root causes are mainly twofold. First, the style embedding extracted from a single reference utterance can hardly provide fine-grained, appropriate prosody information for arbitrary text to be synthesized. Second, in these models the content/text, prosody, and speaker timbre are usually highly entangled, so it is unrealistic to expect satisfying results when freely combining these components, e.g., when transferring speaking style between speakers. In this paper, we propose a cross-speaker style transfer text-to-speech (TTS) model with an explicit prosody bottleneck. The prosody bottleneck robustly builds up the kernels that account for speaking style and disentangles prosody from content and speaker timbre, thereby guaranteeing high-quality cross-speaker style transfer. Evaluation results show that the proposed method even achieves on-par performance with the source speaker's speaker-dependent (SD) model in objective prosody measurements, and significantly outperforms the cycle-consistency and GMVAE-based baselines in both objective and subjective evaluations.
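The bottleneck idea above can be illustrated with a minimal sketch: a narrow linear projection that compresses frame-level prosody features so that content and speaker-timbre detail cannot pass through. This is a hypothetical illustration, not the paper's implementation; the function name `prosody_bottleneck`, the feature dimensions, and the random projection weights are all assumptions made here for demonstration.

```python
# Hypothetical sketch of a prosody bottleneck (NOT the paper's actual model).
# Frame-level prosody features (e.g. pitch/energy contours) are projected
# into a much lower-dimensional space; the narrow dimensionality acts as an
# information bottleneck that discourages content and timbre leakage.
import numpy as np

rng = np.random.default_rng(0)


def prosody_bottleneck(prosody_feats: np.ndarray, bottleneck_dim: int = 4) -> np.ndarray:
    """Project (frames, in_dim) prosody features to (frames, bottleneck_dim).

    A single random linear layer stands in for the learned bottleneck;
    in a real model W would be trained jointly with the TTS network.
    """
    in_dim = prosody_feats.shape[-1]
    W = rng.standard_normal((in_dim, bottleneck_dim)) / np.sqrt(in_dim)
    return prosody_feats @ W


# 50 frames of 32-dimensional prosody features (illustrative sizes).
feats = rng.standard_normal((50, 32))
z = prosody_bottleneck(feats)
print(z.shape)  # the bottleneck output: (50, 4)
```

In a full system, this low-dimensional `z` would condition the decoder alongside separate content and speaker embeddings, so that swapping the speaker embedding transfers timbre while `z` carries only speaking style.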