Although neural text-to-speech (TTS) models have attracted a lot of attention and succeeded in generating human-like speech, there is still room for improvement in their naturalness and architectural efficiency. In this work, we propose a novel non-autoregressive TTS model, namely Diff-TTS, which achieves highly natural and efficient speech synthesis. Given text, Diff-TTS exploits a denoising diffusion framework to transform a noise signal into a mel-spectrogram over diffusion time steps. To learn the mel-spectrogram distribution conditioned on the text, we present a likelihood-based optimization method for TTS. Furthermore, to boost the inference speed, we leverage an accelerated sampling method that allows Diff-TTS to generate raw waveforms much faster without significantly degrading perceptual quality. Through experiments, we verify that Diff-TTS generates speech 28 times faster than real-time on a single NVIDIA 2080Ti GPU.
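To make the diffusion-based generation described above concrete, the sketch below shows a generic DDPM-style reverse (denoising) loop that turns Gaussian noise into a mel-spectrogram conditioned on a text embedding. This is not the paper's implementation; the model interface, tensor shapes, number of steps, and the linear noise schedule are illustrative assumptions.

```python
import torch

def reverse_diffusion(model, text_emb, n_mels=80, frames=400, T=400, betas=None):
    """Minimal DDPM-style reverse process (illustrative sketch):
    start from Gaussian noise and iteratively denoise it into a mel-spectrogram.
    `model(x_t, t, text_emb)` is assumed to predict the noise added at step t."""
    if betas is None:
        betas = torch.linspace(1e-4, 0.02, T)        # assumed linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(1, n_mels, frames)               # x_T ~ N(0, I)
    for t in reversed(range(T)):
        eps = model(x, torch.tensor([t]), text_emb)  # predicted noise at step t
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = mean + torch.sqrt(betas[t]) * torch.randn_like(x)
        else:
            x = mean                                  # x_0: the generated mel-spectrogram
    return x
```

The accelerated sampling mentioned in the abstract corresponds, in this kind of loop, to skipping intermediate time steps so that far fewer than T model evaluations are needed per utterance.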