Recently, there has been increasing interest in neural speech synthesis. While deep neural networks achieve state-of-the-art results on text-to-speech (TTS) tasks, generating more emotional and expressive speech remains a challenge for researchers due to the scarcity of high-quality emotional speech datasets and the lack of advanced emotional TTS models. In this paper, we first briefly introduce and publicly release a Mandarin emotional speech dataset comprising 9,724 samples, each consisting of an audio file and a human-labeled emotion annotation. We then propose a simple but effective architecture for emotional speech synthesis called EMSpeech. Unlike models that require additional reference audio as input, our model predicts emotion labels directly from the input text and generates more expressive speech conditioned on the resulting emotion embedding. In the experiments, we first validate the effectiveness of our dataset through an emotion classification task. We then train our model on the proposed dataset and conduct a series of subjective evaluations. Finally, by showing comparable performance on the emotional speech synthesis task, we demonstrate the capability of the proposed model.