Learning an emotion embedding from reference audio is a straightforward approach to multi-emotion speech synthesis in encoder-decoder systems. However, how to obtain a better emotion embedding and how to inject it into the TTS acoustic model more effectively remain open questions. In this paper, we propose a novel constraint that helps the VAE extract emotion embeddings with better cluster cohesion. In addition, the obtained emotion embedding is used as a query to aggregate the latent representations of all encoder layers via attention. Moreover, queries derived from the encoder layers themselves are also helpful. Experiments show that the proposed methods enhance the encoding of comprehensive syntactic and semantic information and produce more expressive emotional speech.
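The layer-aggregation idea above can be sketched as follows. This is a minimal illustrative example, not the paper's exact formulation: the emotion embedding acts as a scaled dot-product attention query, and the per-layer encoder outputs serve as keys and values, with attention weights normalized across layers at each time step. All function and variable names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_layers(layer_outputs, emotion_emb):
    """Attention-pool latent representations across encoder layers.

    layer_outputs: array of shape (L, T, d) -- L encoder layers,
                   T time steps, d-dimensional hidden states.
    emotion_emb:   array of shape (d,) -- the emotion embedding,
                   used as the attention query.
    Returns an aggregated representation of shape (T, d).
    """
    L, T, d = layer_outputs.shape
    # Scaled dot-product score between the emotion query and each
    # layer's hidden state, computed per time step: shape (L, T).
    scores = np.einsum('ltd,d->lt', layer_outputs, emotion_emb) / np.sqrt(d)
    # Normalize over the layer axis so each time step gets a
    # convex combination of the L layer representations.
    weights = softmax(scores, axis=0)
    return np.einsum('lt,ltd->td', weights, layer_outputs)

# Usage with random stand-in tensors:
rng = np.random.default_rng(0)
hidden = rng.standard_normal((4, 6, 8))   # 4 layers, 6 steps, dim 8
emotion = rng.standard_normal(8)
aggregated = aggregate_layers(hidden, emotion)
print(aggregated.shape)                   # (6, 8)
```

In practice the query, keys, and values would typically pass through learned projections first; they are omitted here to keep the aggregation mechanism itself visible.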