Although prosody is related to linguistic information up to the discourse structure, most text-to-speech (TTS) systems only take into account the linguistic information within each sentence, which makes it challenging to convert a paragraph of text into natural and expressive speech. In this paper, we propose to use the text embeddings of the neighbouring sentences to improve the prosody generation for each utterance of a paragraph in an end-to-end fashion, without using any explicit prosody features. More specifically, cross-utterance (CU) context vectors, which are produced by an additional CU encoder based on the sentence embeddings extracted by a pre-trained BERT model, are used to augment the input of the Tacotron2 decoder. Two types of BERT embeddings are investigated, which leads to the use of different CU encoder structures. Experimental results on a Mandarin audiobook dataset and the LJ-Speech English audiobook dataset demonstrate that the use of CU information can improve the naturalness and expressiveness of the synthesized speech. Subjective listening tests show that most participants prefer the voice generated with the CU encoder over that generated by standard Tacotron2. It is also found that the prosody can be controlled indirectly by changing the neighbouring sentences.
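To make the idea concrete, the following is a minimal PyTorch sketch of how CU context vectors derived from neighbouring-sentence BERT embeddings could be attached to the decoder input. It assumes the BERT sentence embeddings are pre-extracted (768-dimensional) and simply pools them; the names `CUEncoder` and `augment_decoder_input`, the pooling strategy, and the projection structure are illustrative assumptions, not the exact architecture described in the paper.

```python
import torch
import torch.nn as nn


class CUEncoder(nn.Module):
    """Hypothetical cross-utterance (CU) encoder sketch.

    Takes pre-extracted BERT sentence embeddings of the current sentence
    and its neighbours, and produces a single CU context vector that can
    be concatenated to every decoder input step.
    """

    def __init__(self, bert_dim: int = 768, cu_dim: int = 128):
        super().__init__()
        # Simple per-sentence projection; the paper uses different CU
        # encoder structures depending on the type of BERT embedding.
        self.proj = nn.Sequential(
            nn.Linear(bert_dim, cu_dim),
            nn.Tanh(),
        )

    def forward(self, sentence_embs: torch.Tensor) -> torch.Tensor:
        # sentence_embs: (batch, num_sentences_in_window, bert_dim)
        h = self.proj(sentence_embs)   # (batch, N, cu_dim)
        return h.mean(dim=1)           # pool over the CU window -> (batch, cu_dim)


def augment_decoder_input(decoder_input: torch.Tensor,
                          cu_context: torch.Tensor) -> torch.Tensor:
    """Concatenate the CU context vector to every decoder time step."""
    # decoder_input: (batch, T, d_dec); cu_context: (batch, cu_dim)
    T = decoder_input.size(1)
    expanded = cu_context.unsqueeze(1).expand(-1, T, -1)
    return torch.cat([decoder_input, expanded], dim=-1)
```

In use, the augmented decoder input would replace the standard Tacotron2 decoder input, so the prosody of each synthesized utterance is conditioned on its neighbouring sentences without any explicit prosody labels.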