Previous work on expressive speech synthesis has focused on modelling a mono-scale style embedding from the current sentence or its context, neglecting the multi-scale nature of speaking style in human speech. In this paper, we propose a multi-scale speaking style modelling method that captures and predicts speaking style at multiple scales to improve the naturalness and expressiveness of synthetic speech. A multi-scale extractor extracts style embeddings at three different levels from the ground-truth speech and explicitly guides the training of a multi-scale style predictor based on hierarchical context information. Both objective and subjective evaluations on a Mandarin audiobook dataset demonstrate that the proposed method significantly improves the naturalness and expressiveness of the synthesized speech.
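To make the idea of extracting style at three levels concrete, here is a minimal sketch of multi-scale pooling over frame-level reference features. The function name, the choice of mean pooling, the specific granularities (global/utterance, sentence, word), and all shapes are illustrative assumptions, not the paper's actual extractor architecture:

```python
import numpy as np

def multi_scale_style(frame_feats, sentence_spans, word_spans):
    """Pool frame-level reference features into style embeddings at
    three granularities (hypothetical sketch, not the paper's model).

    frame_feats: (T, D) acoustic features from ground-truth speech.
    sentence_spans / word_spans: lists of (start, end) frame indices.
    Returns a dict with one embedding per scale.
    """
    # Global (utterance-level) style: average over all frames -> (D,)
    global_style = frame_feats.mean(axis=0)
    # Sentence-level style: one embedding per sentence span -> (S, D)
    sent_style = np.stack([frame_feats[s:e].mean(axis=0)
                           for s, e in sentence_spans])
    # Word-level style: one embedding per word span -> (W, D)
    word_style = np.stack([frame_feats[s:e].mean(axis=0)
                           for s, e in word_spans])
    return {"global": global_style,
            "sentence": sent_style,
            "word": word_style}

# Toy usage with random features standing in for real acoustics.
T, D = 100, 8
feats = np.random.default_rng(0).normal(size=(T, D))
styles = multi_scale_style(feats,
                           sentence_spans=[(0, 50), (50, 100)],
                           word_spans=[(0, 25), (25, 50), (50, 100)])
print(styles["global"].shape, styles["sentence"].shape, styles["word"].shape)
# (8,) (2, 8) (3, 8)
```

In training, embeddings like these from ground-truth speech would serve as explicit targets for a style predictor that sees only hierarchical text context.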