Expressive text-to-speech (TTS) can synthesize a new speaking style by imitating the prosody and timbre of a reference audio, but faces the following challenges: (1) the highly dynamic prosody information in the reference audio is difficult to extract, especially when the reference audio contains background noise; (2) the TTS system should generalize well to unseen speaking styles. In this paper, we present a \textbf{no}ise-\textbf{r}obust \textbf{e}xpressive TTS model (NoreSpeech), which can robustly transfer the speaking style of a noisy reference utterance to synthesized speech. Specifically, NoreSpeech includes several components: (1) a novel DiffStyle module, which leverages powerful probabilistic denoising diffusion models to learn noise-agnostic speaking style features from a teacher model via knowledge distillation; (2) a VQ-VAE block, which maps the style features into a controllable quantized latent space to improve the generalization of style transfer; and (3) a straightforward but effective parameter-free text-style alignment module, which enables NoreSpeech to transfer style to a textual input from a length-mismatched reference utterance. Experiments demonstrate that NoreSpeech is more effective than previous expressive TTS models in noisy environments. Audio samples and code are available at: \href{http://dongchaoyang.top/NoreSpeech\_demo/}{http://dongchaoyang.top/NoreSpeech\_demo/}