We propose a novel training strategy for a Tacotron-based text-to-speech (TTS) system to improve the expressiveness of synthesized speech. One of the key challenges in prosody modeling is the lack of a prosody reference, which makes explicit modeling difficult. The proposed technique requires no prosody annotations in the training data. It does not attempt to model prosody explicitly either, but rather encodes the association between input text and its prosody style within a Tacotron-based TTS framework. This marks a departure from the style token paradigm, where prosody is explicitly modeled by a bank of prosody embeddings. The proposed training strategy adopts a combination of two objective functions: 1) a frame-level reconstruction loss, calculated between the synthesized and target spectral features; and 2) an utterance-level style reconstruction loss, calculated between the deep style features of the synthesized and target speech. The style reconstruction loss is formulated as a perceptual loss to ensure that utterance-level speech style is taken into consideration during training. Experiments show that the proposed training strategy achieves strong performance, outperforming a state-of-the-art baseline in both naturalness and expressiveness. To the best of our knowledge, this is the first study to incorporate utterance-level perceptual quality as a loss function into Tacotron training for improved expressiveness.
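For concreteness, a minimal PyTorch-style sketch of the combined objective is given below. It assumes a `style_encoder` module that maps a mel spectrogram to an utterance-level deep style embedding; that module, the L1/MSE loss choices, and the `style_weight` hyperparameter are illustrative assumptions rather than details specified in this abstract.

```python
import torch
import torch.nn.functional as F

def combined_loss(pred_mel, target_mel, style_encoder, style_weight=1.0):
    """Sketch of the combined training objective: frame-level reconstruction
    loss plus utterance-level style reconstruction (perceptual) loss.

    `style_encoder` is assumed to be a network mapping a mel spectrogram to
    an utterance-level deep style embedding; its architecture and the
    `style_weight` value are hypothetical, not taken from the paper.
    """
    # 1) Frame-level reconstruction loss between the synthesized and
    #    target spectral features (L1, as commonly used with Tacotron).
    frame_loss = F.l1_loss(pred_mel, target_mel)

    # 2) Utterance-level style reconstruction loss: distance between the
    #    deep style features of the synthesized and target speech.
    pred_style = style_encoder(pred_mel)
    with torch.no_grad():  # target style features serve as a fixed reference
        target_style = style_encoder(target_mel)
    style_loss = F.mse_loss(pred_style, target_style)

    return frame_loss + style_weight * style_loss
```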