Custom voice aims to build a personalized speech synthesis system by adapting a source speech synthesis model to a target speaker using only a few recordings of that speaker. A common approach to building a custom voice is to combine an adaptive acoustic model with a robust vocoder. However, training a robust vocoder usually requires a multi-speaker dataset covering a wide range of ages and timbres so that the trained vocoder generalizes to unseen speakers. Collecting such a multi-speaker dataset is difficult, and its distribution inevitably mismatches that of the target speaker's data. This paper addresses these problems from a different perspective by proposing an adaptive vocoder for custom voice. The adaptive vocoder mainly uses a cross-domain consistency loss to mitigate the overfitting that GAN-based neural vocoders suffer during few-shot transfer learning. We construct two adaptive vocoders, AdaMelGAN and AdaHiFi-GAN. First, we pre-train the source vocoder on the AISHELL3 and CSMSC datasets, respectively; then we fine-tune it on the internal VXI-children dataset with only a small amount of adaptation data. The empirical results show that a high-quality custom voice system can be built by combining an adaptive acoustic model with an adaptive vocoder.
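The abstract does not spell out how the cross-domain consistency loss is computed, but to make the idea concrete, below is a minimal PyTorch sketch of one common formulation (in the spirit of the cross-domain correspondence loss used for few-shot GAN adaptation): the pairwise similarity structure of a batch is kept consistent between the frozen source vocoder and the adapted vocoder. The function name, the feature-list interface, and the temperature `tau` are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F


def cross_domain_consistency_loss(feats_src, feats_adapt, tau=0.1):
    """Hypothetical sketch of a cross-domain consistency (CDC) loss.

    feats_src / feats_adapt: lists of intermediate feature tensors
    (one per chosen generator layer), each of shape (B, ...), taken
    from the frozen source vocoder and the adapted vocoder on the
    SAME batch of mel-spectrogram inputs.

    The loss encourages the adapted vocoder to preserve the relative
    (pairwise) similarities among batch samples that the source
    vocoder produces, which regularizes few-shot fine-tuning against
    overfitting and mode collapse.
    """
    loss = 0.0
    for fs, fa in zip(feats_src, feats_adapt):
        fs = fs.flatten(1)  # (B, D)
        fa = fa.flatten(1)
        # Cosine similarity between every pair of samples in the batch.
        sim_s = F.cosine_similarity(fs.unsqueeze(1), fs.unsqueeze(0), dim=-1)
        sim_a = F.cosine_similarity(fa.unsqueeze(1), fa.unsqueeze(0), dim=-1)
        B = sim_s.size(0)
        # Drop the diagonal (self-similarity) entries.
        mask = ~torch.eye(B, dtype=torch.bool, device=fs.device)
        # Turn similarities into distributions over the other samples.
        p_src = F.softmax(sim_s[mask].view(B, B - 1) / tau, dim=-1)
        log_p_adapt = F.log_softmax(sim_a[mask].view(B, B - 1) / tau, dim=-1)
        # KL divergence between the two similarity distributions.
        loss = loss + F.kl_div(log_p_adapt, p_src, reduction="batchmean")
    return loss / len(feats_src)
```

During fine-tuning, a term like this would presumably be added to the usual GAN generator objective, e.g. `loss_G = loss_adv + lambda_cdc * cross_domain_consistency_loss(feats_src, feats_adapt)`, with the pre-trained source vocoder kept frozen as the reference; `lambda_cdc` is a hypothetical weighting hyperparameter.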