We propose a novel training algorithm for a multi-speaker neural text-to-speech (TTS) model based on multi-task adversarial training. A conventional generative adversarial network (GAN)-based training algorithm significantly improves the quality of synthetic speech by reducing the statistical difference between natural and synthetic speech. However, it does not guarantee the generalization performance of the trained TTS model when synthesizing voices of unseen speakers not included in the training data. Our algorithm alternately trains two deep neural networks: a multi-task discriminator and a multi-speaker neural TTS model (i.e., the generator of the GAN). The discriminator is trained not only to distinguish between natural and synthetic speech but also to verify whether the speaker of the input speech is existent or non-existent (i.e., newly generated by interpolating seen speakers' embedding vectors). Meanwhile, the generator is trained to minimize the weighted sum of the speech reconstruction loss and the adversarial loss for fooling the discriminator, which achieves high-quality multi-speaker TTS even when the target speaker is unseen. Experimental evaluation shows that our algorithm improves the quality of synthetic speech compared with the conventional GANSpeech algorithm.
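The alternating objectives described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names, the use of sigmoid logits with binary cross-entropy, and the `adv_weight` parameter are assumptions.

```python
import numpy as np

def bce(logit, target):
    # Binary cross-entropy on a single sigmoid logit.
    p = 1.0 / (1.0 + np.exp(-logit))
    return -(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))

def discriminator_loss(d_nat, d_syn, d_spk_seen, d_spk_interp):
    # Task 1: classify natural (1) vs. synthetic (0) speech.
    # Task 2: classify existent (1) vs. non-existent (0) speakers,
    # where non-existent speakers come from interpolated embeddings.
    return (bce(d_nat, 1.0) + bce(d_syn, 0.0)
            + bce(d_spk_seen, 1.0) + bce(d_spk_interp, 0.0))

def generator_loss(recon_loss, d_syn, adv_weight=1.0):
    # Weighted sum of the speech reconstruction loss and the
    # adversarial loss: the generator tries to make the
    # discriminator label its synthetic speech as natural.
    return recon_loss + adv_weight * bce(d_syn, 1.0)
```

In an actual training loop these two losses would be minimized in alternation: one step updates the discriminator's parameters on `discriminator_loss`, the next updates the TTS model's parameters on `generator_loss`.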