State-of-the-art Text-To-Speech (TTS) models are capable of producing high-quality speech. The generated speech, however, is usually neutral in emotional expression, whereas one often wants fine-grained emotional control over words or phonemes. Although the task remains challenging, the first TTS models able to control the voice by manually assigning emotion intensity have recently been proposed. Unfortunately, because they neglect intra-class distance, the resulting intensity differences are often unrecognizable. In this paper, we propose a fine-grained controllable emotional TTS that considers both inter- and intra-class distances and is able to synthesize speech with recognizable intensity differences. Our subjective and objective experiments demonstrate that our model outperforms two state-of-the-art controllable TTS models in controllability, emotional expressiveness, and naturalness.