This paper introduces WaveGrad 2, a non-autoregressive generative model for text-to-speech synthesis. WaveGrad 2 is trained to estimate the gradient of the log conditional density of the waveform given a phoneme sequence. The model takes an input phoneme sequence and, through an iterative refinement process, generates an audio waveform. This contrasts with the original WaveGrad vocoder, which conditions on mel-spectrogram features generated by a separate model. The iterative refinement process starts from Gaussian noise and, through a series of refinement steps (e.g., 50 steps), progressively recovers the audio sequence. WaveGrad 2 offers a natural way to trade off inference speed against sample quality by adjusting the number of refinement steps. Experiments show that the model can generate high-fidelity audio, approaching the performance of a state-of-the-art neural TTS system. We also report various ablation studies over different model configurations. Audio samples are available at https://wavegrad.github.io/v2.
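As a rough illustration of the refinement loop described above, here is a minimal sketch of DDPM-style ancestral sampling, assuming the network predicts the noise added to the waveform and is conditioned on the phoneme sequence and a continuous noise level. The `model(y, phonemes, noise_level)` interface, the linear beta schedule, and the constants are illustrative assumptions, not the paper's exact parameterization.

```python
import torch

@torch.no_grad()
def refine(model, phonemes, num_steps=50, audio_len=16000):
    """Iterative refinement: start from Gaussian noise and denoise.

    A DDPM-style ancestral sampler; `model` is assumed to predict the
    added noise given the noisy waveform, the phoneme sequence, and the
    current noise level (all names here are hypothetical).
    """
    # Linear beta schedule; actual WaveGrad schedules differ, this is a placeholder.
    beta = torch.linspace(1e-4, 0.05, num_steps)
    alpha = 1.0 - beta
    alpha_bar = torch.cumprod(alpha, dim=0)

    # Start from pure Gaussian noise.
    y = torch.randn(1, audio_len)

    for t in reversed(range(num_steps)):
        # Predict the noise component conditioned on the phoneme sequence.
        eps = model(y, phonemes, torch.sqrt(alpha_bar[t]))
        # Posterior mean of the reverse step: remove the predicted noise.
        y = (y - beta[t] / torch.sqrt(1.0 - alpha_bar[t]) * eps) / torch.sqrt(alpha[t])
        if t > 0:
            # Re-inject a controlled amount of noise except at the final step.
            y = y + torch.sqrt(beta[t]) * torch.randn_like(y)
    return y.clamp(-1.0, 1.0)
```

Reducing `num_steps` speeds up inference at the cost of sample quality, which is the trade-off the abstract refers to.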