Scaling text-to-speech (TTS) to large-scale, multi-speaker, and in-the-wild datasets is important for capturing the diversity of human speech, such as speaker identities, prosodies, and styles (e.g., singing). Current large TTS systems usually quantize speech into discrete tokens and use language models to generate these tokens one by one, which suffer from unstable prosody, word skipping/repeating issues, and poor voice quality. In this paper, we develop NaturalSpeech 2, a TTS system that leverages a neural audio codec with residual vector quantizers to obtain quantized latent vectors and uses a diffusion model to generate these latent vectors conditioned on text input. To enhance the zero-shot capability that is important for diverse speech synthesis, we design a speech prompting mechanism to facilitate in-context learning in the diffusion model and in the duration/pitch predictor. We scale NaturalSpeech 2 to large-scale datasets with 44K hours of speech and singing data and evaluate its voice quality on unseen speakers. NaturalSpeech 2 outperforms previous TTS systems by a large margin in terms of prosody/timbre similarity, robustness, and voice quality in a zero-shot setting, and performs novel zero-shot singing synthesis with only a speech prompt. Audio samples are available at https://speechresearch.github.io/naturalspeech2.
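To make the residual-vector-quantizer idea concrete, the following is a minimal sketch of generic residual vector quantization (not the paper's implementation): each quantizer in the stack encodes the residual left by the previous one, so the sum of the selected codewords approximates the codec latent. All names, codebook sizes, and dimensions below are illustrative assumptions.

```python
import numpy as np

def residual_vector_quantize(z, codebooks):
    """Quantize a latent vector z with a stack of codebooks (RVQ sketch).

    Each quantizer picks the codeword nearest to the current residual and
    subtracts it, so the accumulated codewords approximate z.
    Returns the discrete token ids and the reconstructed latent.
    """
    residual = z.copy()
    ids, quantized = [], np.zeros_like(z)
    for codebook in codebooks:                       # codebook shape: (K, D)
        dists = np.linalg.norm(codebook - residual, axis=1)
        idx = int(np.argmin(dists))                  # nearest codeword to the residual
        ids.append(idx)
        quantized += codebook[idx]
        residual -= codebook[idx]
    return ids, quantized

# Toy usage with hypothetical sizes: 4 quantizers, 256 codewords each, 128-dim latents.
rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(256, 128)) for _ in range(4)]
z = rng.normal(size=128)
ids, z_hat = residual_vector_quantize(z, codebooks)
print(ids, np.linalg.norm(z - z_hat))
```

In NaturalSpeech 2, the diffusion model regresses these codec latents (rather than generating the discrete token ids autoregressively), which is what the abstract contrasts with token-by-token language-model approaches.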