We propose Guided-TTS 2, a diffusion-based generative model for high-quality adaptive TTS using untranscribed data. Guided-TTS 2 combines a speaker-conditional diffusion model with a speaker-dependent phoneme classifier for adaptive text-to-speech. We train the speaker-conditional diffusion model on large-scale untranscribed datasets for a classifier-free guidance method, and we further fine-tune the diffusion model on the reference speech of the target speaker for adaptation, which takes only 40 seconds. We demonstrate that Guided-TTS 2 achieves performance comparable to high-quality single-speaker TTS baselines in terms of speech quality and speaker similarity with only ten seconds of untranscribed data. We further show that Guided-TTS 2 outperforms adaptive TTS baselines on multi-speaker datasets even in the zero-shot adaptation setting. Guided-TTS 2 can adapt to a wide range of voices using only untranscribed speech, which enables adaptive TTS for the voices of non-human characters such as Gollum in \textit{"The Lord of the Rings"}.
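For readers unfamiliar with classifier-free guidance, a minimal sketch of the standard formulation follows; the guidance scale $\gamma$ and null condition $\varnothing$ are conventional notation rather than symbols taken from this abstract. The sampler mixes the conditional and unconditional scores of the same diffusion model:
\[
\hat{s}_\theta(X_t \mid c) \;=\; s_\theta(X_t \mid \varnothing) \;+\; \gamma \,\bigl( s_\theta(X_t \mid c) - s_\theta(X_t \mid \varnothing) \bigr),
\]
where $s_\theta(X_t \mid c)$ is the score estimate conditioned on $c$ (here, the speaker), $s_\theta(X_t \mid \varnothing)$ is the unconditional estimate obtained by dropping the condition during training, and $\gamma > 1$ strengthens the conditioning at sampling time.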