Non-autoregressive text-to-speech (TTS) models such as FastSpeech can synthesize speech significantly faster than previous autoregressive models with comparable quality. The training of the FastSpeech model relies on an autoregressive teacher model for duration prediction (to provide more information as input) and knowledge distillation (to simplify the data distribution in the output), which can ease the one-to-many mapping problem (i.e., multiple speech variations correspond to the same text) in TTS. However, FastSpeech has several disadvantages: 1) the teacher-student distillation pipeline is complicated and time-consuming, and 2) the duration extracted from the teacher model is not accurate enough, and the target mel-spectrograms distilled from the teacher model suffer from information loss due to data simplification, both of which limit the voice quality. In this paper, we propose FastSpeech 2, which addresses the issues in FastSpeech and better solves the one-to-many mapping problem in TTS by 1) directly training the model with the ground-truth target instead of the simplified output from the teacher, and 2) introducing more variation information of speech (e.g., pitch, energy and more accurate duration) as conditional inputs. Specifically, we extract duration, pitch and energy from the speech waveform, take them directly as conditional inputs during training, and use predicted values during inference. We further design FastSpeech 2s, which is the first attempt to directly generate speech waveform from text in parallel, enjoying the benefit of fully end-to-end inference. Experimental results show that 1) FastSpeech 2 achieves a 3x training speed-up over FastSpeech, and FastSpeech 2s enjoys even faster inference speed; 2) FastSpeech 2 and 2s outperform FastSpeech in voice quality, and FastSpeech 2 can even surpass autoregressive models. Audio samples are available at https://speechresearch.github.io/fastspeech2/.
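The conditioning scheme described above (ground-truth pitch/energy as inputs during training, predictor outputs during inference) can be illustrated with a minimal PyTorch sketch. This is only an illustrative sketch of the idea, not the authors' implementation: the module names (`VariancePredictor`, `VarianceAdaptor`), hidden size, and the bucketing scheme for embedding continuous values are all assumptions.

```python
# Minimal sketch of variance conditioning, assuming a PyTorch encoder that
# outputs (batch, time, hidden) hidden states. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VariancePredictor(nn.Module):
    """Predicts one scalar per frame (e.g., pitch, energy, or log-duration)."""

    def __init__(self, hidden: int = 256, kernel: int = 3):
        super().__init__()
        self.conv1 = nn.Conv1d(hidden, hidden, kernel, padding=kernel // 2)
        self.conv2 = nn.Conv1d(hidden, hidden, kernel, padding=kernel // 2)
        self.proj = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, time, hidden)
        h = F.relu(self.conv1(x.transpose(1, 2)))
        h = F.relu(self.conv2(h)).transpose(1, 2)
        return self.proj(h).squeeze(-1)           # (batch, time)


class VarianceAdaptor(nn.Module):
    """Adds pitch/energy information to the hidden sequence.

    Training: embed ground-truth values extracted from the waveform.
    Inference: embed the predictors' own outputs instead.
    """

    def __init__(self, hidden: int = 256, n_bins: int = 256):
        super().__init__()
        self.pitch_predictor = VariancePredictor(hidden)
        self.energy_predictor = VariancePredictor(hidden)
        self.pitch_embed = nn.Embedding(n_bins, hidden)
        self.energy_embed = nn.Embedding(n_bins, hidden)
        self.n_bins = n_bins

    def _bucketize(self, values, lo=-4.0, hi=4.0):
        # Map continuous (roughly normalized) values to embedding indices.
        bins = torch.linspace(lo, hi, self.n_bins - 1, device=values.device)
        return torch.bucketize(values, bins)

    def forward(self, x, gt_pitch=None, gt_energy=None):
        pitch_pred = self.pitch_predictor(x)
        energy_pred = self.energy_predictor(x)
        # Use ground truth when provided (training); fall back to predictions (inference).
        pitch = gt_pitch if gt_pitch is not None else pitch_pred
        energy = gt_energy if gt_energy is not None else energy_pred
        x = x + self.pitch_embed(self._bucketize(pitch))
        x = x + self.energy_embed(self._bucketize(energy))
        # pitch_pred / energy_pred would be trained with an MSE loss against the ground truth.
        return x, pitch_pred, energy_pred
```

In this sketch, duration is handled the same way in principle (predict per-phoneme durations, supervise them with values extracted from the data, and use the predictions to expand the hidden sequence at inference), but the length-regulation step is omitted for brevity.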