Recently, text-to-speech (TTS) models such as FastSpeech and ParaNet have been proposed to generate mel-spectrograms from text in parallel. Despite this advantage, parallel TTS models cannot be trained without guidance from autoregressive TTS models acting as external aligners. In this work, we propose Glow-TTS, a flow-based generative model for parallel TTS that does not require any external aligner. By combining the properties of flows and dynamic programming, the proposed model searches for the most probable monotonic alignment between text and the latent representation of speech on its own. We demonstrate that enforcing hard monotonic alignments enables robust TTS that generalizes to long utterances, and that employing generative flows enables fast, diverse, and controllable speech synthesis. Glow-TTS obtains an order-of-magnitude speed-up over the autoregressive model, Tacotron 2, at synthesis, with comparable speech quality. We further show that our model can easily be extended to a multi-speaker setting.
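To make the alignment idea concrete, below is a minimal NumPy sketch of a dynamic-programming monotonic alignment search between text tokens and latent speech frames. The function name `monotonic_alignment_search`, the `(T_text, T_mel)` input shape, and the quadratic recursion are illustrative assumptions for exposition, not the paper's implementation.

```python
import numpy as np

def monotonic_alignment_search(log_likelihood):
    """Most probable monotonic alignment between text tokens and latent frames.

    log_likelihood[i, j] is the log-likelihood of latent frame j under the
    distribution predicted for text token i; shape (T_text, T_mel), with
    T_text <= T_mel. Returns a 0/1 matrix of the same shape in which every
    frame is assigned to exactly one token and tokens appear in order.
    (Illustrative sketch, not the authors' code.)
    """
    T_text, T_mel = log_likelihood.shape

    # Q[i, j]: best cumulative log-likelihood over monotonic alignments of
    # tokens 0..i to frames 0..j, with frame j assigned to token i.
    Q = np.full((T_text, T_mel), -np.inf)
    Q[0, 0] = log_likelihood[0, 0]
    for j in range(1, T_mel):
        for i in range(min(j + 1, T_text)):
            stay = Q[i, j - 1]                                # frame j-1 on the same token
            advance = Q[i - 1, j - 1] if i > 0 else -np.inf   # frame j-1 on the previous token
            Q[i, j] = log_likelihood[i, j] + max(stay, advance)

    # Backtrack from the last token and frame to recover the hard alignment.
    alignment = np.zeros((T_text, T_mel), dtype=np.int64)
    i = T_text - 1
    for j in range(T_mel - 1, -1, -1):
        alignment[i, j] = 1
        # Move to the previous token if forced (i == j) or if it scores better.
        if i > 0 and (i == j or Q[i - 1, j - 1] >= Q[i, j - 1]):
            i -= 1
    return alignment
```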