Recently, there has been growing demand for synthesizing personalized speech with text-to-speech (TTS) systems. However, previous TTS models require a large amount of speech from the target speaker for training, and recording many utterances from that speaker is costly and difficult. Data augmentation of the speech is one solution, but it degrades the quality of the synthesized speech. Multi-speaker TTS models have been proposed to address this issue, yet the imbalance in the number of utterances across speakers harms the similarity of the synthesized voice to the target speaker. We propose the Target Domain Adaptation Speech Synthesis Network (TDASS) to address these issues. Built on the backbone of Tacotron2, a high-quality TTS model, TDASS introduces a self-interested classifier to reduce the influence of non-target speakers. In addition, a special gradient reversal layer, which applies different operations to target and non-target samples, is added to the classifier. We evaluate the model on a Chinese speech corpus; the experiments show that the proposed method outperforms the baseline in terms of voice quality and voice similarity.
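The gradient reversal layer (GRL) originates from domain-adversarial training: it acts as the identity in the forward pass and flips the gradient in the backward pass, pushing the encoder toward speaker-invariant features. Below is a minimal PyTorch sketch of a speaker classifier with sample-dependent gradient handling, where gradients are reversed only for non-target utterances. The class names, layer sizes, and the exact target/non-target rule are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales and flips the gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient encourages the upstream encoder to remove speaker cues.
        return -ctx.lambd * grad_output, None


class SpeakerClassifier(nn.Module):
    """Hypothetical classifier head placed on top of the TTS encoder output."""

    def __init__(self, feat_dim: int, num_speakers: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_speakers),
        )

    def forward(self, enc_out: torch.Tensor, is_target: torch.Tensor, lambd: float = 1.0):
        # Assumed rule: pass gradients through unchanged for target-speaker
        # utterances, reverse them for non-target utterances.
        reversed_feat = GradReverse.apply(enc_out, lambd)
        feat = torch.where(is_target.view(-1, 1), enc_out, reversed_feat)
        return self.net(feat)


if __name__ == "__main__":
    # Toy usage: a batch of 4 encoder frames, 2 of which belong to the target speaker.
    clf = SpeakerClassifier(feat_dim=512, num_speakers=10)
    enc_out = torch.randn(4, 512, requires_grad=True)
    is_target = torch.tensor([True, False, True, False])
    logits = clf(enc_out, is_target)
    logits.sum().backward()  # gradients flow normally for target rows, reversed otherwise
```

The key design choice in such a layer is that the forward computation is unchanged, so the classifier itself is trained as usual; only the gradients reaching the shared encoder are selectively negated, which is one way to suppress non-target speaker information without discarding the non-target data.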