Exploiting the rich linguistic information in raw text is crucial for expressive text-to-speech (TTS). With the development of large-scale pre-trained text representations, bidirectional encoder representations from Transformers (BERT) have been shown to embody semantic information and have recently been applied to TTS. However, original or simply fine-tuned BERT embeddings still cannot provide the sufficient semantic knowledge that expressive TTS models should take into account. In this paper, we propose a word-level semantic representation enhancing method based on dependency structure and pre-trained BERT embeddings. The BERT embedding of each word is reprocessed considering its specific dependencies and related words in the sentence, to generate a more effective semantic representation for TTS. To better utilize the dependency structure, a relational gated graph network (RGGN) is introduced to make semantic information flow and aggregate through the dependency structure. Experimental results show that the proposed method further improves the naturalness and expressiveness of synthesized speech on both Mandarin and English datasets.
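To make the idea concrete, the following is a minimal sketch, not the authors' implementation, of how a relational gated graph layer could reprocess word-level BERT embeddings over dependency arcs: each dependency relation type gets its own message transform, and a gated (GRU) update fuses the aggregated messages into each word's state. The class name `RelationalGatedGraphLayer`, the edge format, and the dimensions are assumptions made for illustration only.

```python
import torch
import torch.nn as nn


class RelationalGatedGraphLayer(nn.Module):
    """One message-passing step over a dependency graph (illustrative sketch).

    Each dependency relation type has its own linear transform; messages
    from related words are summed and fused into the word state with a
    GRU cell, which provides the gating.
    """

    def __init__(self, hidden_dim: int, num_relations: int):
        super().__init__()
        # One transform per dependency relation type (e.g. nsubj, dobj, ...)
        self.rel_transforms = nn.ModuleList(
            nn.Linear(hidden_dim, hidden_dim) for _ in range(num_relations)
        )
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, h: torch.Tensor, edges) -> torch.Tensor:
        """h: (num_words, hidden_dim) word states, initialized from BERT.
        edges: iterable of (head_idx, dep_idx, rel_id) dependency arcs."""
        messages = torch.zeros_like(h)
        for head, dep, rel in edges:
            # Semantic information flows from the head word to its dependent.
            messages[dep] = messages[dep] + self.rel_transforms[rel](h[head])
        # Gated update: combine aggregated messages with the previous state.
        return self.gru(messages, h)


# Toy usage: 4 words, 3 relation types, BERT-sized 768-dim word embeddings.
layer = RelationalGatedGraphLayer(hidden_dim=768, num_relations=3)
bert_word_embeddings = torch.randn(4, 768)            # word-level BERT outputs
dependency_edges = [(1, 0, 0), (1, 3, 1), (3, 2, 2)]   # (head, dependent, relation)
enhanced = layer(bert_word_embeddings, dependency_edges)
print(enhanced.shape)  # torch.Size([4, 768])
```

In practice, several such layers can be stacked so information propagates beyond immediate neighbors in the dependency tree; the enhanced word-level representations would then condition the TTS acoustic model.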