With the popularity of deep neural networks, speech synthesis has achieved significant improvements based on the end-to-end encoder-decoder framework in recent years. More and more applications relying on speech synthesis technology are being used in daily life. A robust speech synthesis model depends on high-quality, customized data, which requires considerable collection effort. It is therefore worth investigating how to take advantage of low-quality, low-resource voice data, which can easily be obtained from the Internet, to synthesize personalized voices. In this paper, the proposed end-to-end speech synthesis model uses both a speaker embedding and a noise representation as conditional inputs to model speaker and noise information respectively. First, the speech synthesis model is pre-trained with both multi-speaker clean data and noise-augmented data; then the pre-trained model is adapted on noisy, low-resource data from a new speaker; finally, by setting the clean-speech condition, the model can synthesize the new speaker's clean voice. Experimental results show that speech generated by the proposed approach achieves better subjective evaluation results than directly fine-tuning a pre-trained multi-speaker speech synthesis model on denoised new-speaker data.
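To make the conditioning scheme concrete, the following is a minimal PyTorch sketch of a decoder conditioned on both a speaker embedding and a noise representation. All module names, dimensions, and the binary clean/noisy encoding of the noise condition are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ConditionalTTS(nn.Module):
    """Sketch: encoder-decoder TTS conditioned on speaker and noise embeddings."""
    def __init__(self, num_speakers, text_dim=256, cond_dim=64, mel_dim=80):
        super().__init__()
        self.speaker_emb = nn.Embedding(num_speakers, cond_dim)
        # Assumed noise representation: index 0 = clean, 1 = noisy.
        self.noise_emb = nn.Embedding(2, cond_dim)
        self.text_encoder = nn.GRU(text_dim, text_dim, batch_first=True)
        self.decoder = nn.GRU(text_dim + 2 * cond_dim, mel_dim, batch_first=True)

    def forward(self, text_feats, speaker_id, noise_id):
        enc, _ = self.text_encoder(text_feats)                   # (B, T, text_dim)
        cond = torch.cat([self.speaker_emb(speaker_id),
                          self.noise_emb(noise_id)], dim=-1)     # (B, 2*cond_dim)
        cond = cond.unsqueeze(1).expand(-1, enc.size(1), -1)     # broadcast over time
        return self.decoder(torch.cat([enc, cond], dim=-1))[0]   # (B, T, mel_dim)

# Stage 1: pre-train on multi-speaker clean data (noise_id=0) plus
#          noise-augmented copies of the same data (noise_id=1).
# Stage 2: adapt on the new speaker's noisy recordings with noise_id=1.
# Stage 3: synthesize the new speaker's clean voice by setting noise_id=0.
model = ConditionalTTS(num_speakers=100)
mel = model(torch.randn(1, 50, 256),
            speaker_id=torch.tensor([3]),
            noise_id=torch.tensor([0]))  # clean condition at inference
```

Because the noise condition is an explicit input rather than a property baked into the weights, the same adapted speaker identity can be decoded under the clean condition even though the adaptation data itself was noisy.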