Neural sequence-to-sequence TTS has demonstrated significantly better output quality than classical statistical parametric speech synthesis using HMMs. However, the new paradigm is not probabilistic, and the use of non-monotonic attention both increases training time and introduces "babbling" failure modes that are unacceptable in production. In this paper, we demonstrate that the old and new paradigms can be combined to obtain the advantages of both worlds, by replacing the attention in Tacotron 2 with an autoregressive left-right no-skip hidden Markov model defined by a neural network. This leads to an HMM-based neural TTS model with monotonic alignment, trained to maximise the full sequence likelihood without approximations. We discuss how to combine innovations from both classical and contemporary TTS for best results. The final system is smaller and simpler than Tacotron 2 and learns to align and speak with fewer iterations, while achieving the same speech naturalness. Unlike Tacotron 2, it also allows easy control over speaking rate. Audio examples and code are available at https://shivammehta007.github.io/Neural-HMM/
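To make the exact-likelihood claim concrete, the recursion below is a minimal sketch of the standard forward algorithm for a left-right no-skip HMM, which sums over all monotonic alignments in closed form. The notation is generic HMM notation rather than a formula taken from this paper, and the autoregressive conditioning of the emission density on past frames is our reading of the model description, not a verbatim definition.

\begin{align*}
\alpha_1(1) &= b_1(x_1), \qquad \alpha_1(n) = 0 \ \text{for } n > 1,\\
\alpha_t(n) &= \bigl[\alpha_{t-1}(n)\, a_{n,n} + \alpha_{t-1}(n-1)\, a_{n-1,n}\bigr]\, b_n(x_t \mid x_{1:t-1}),\\
p(x_{1:T}) &= \alpha_T(N),
\end{align*}

where states $1,\dots,N$ correspond to the input symbols, $a_{n,n}$ and $a_{n-1,n}$ are the only non-zero transition probabilities (stay in the current state or advance by one), and the transition and emission parameters are produced by the neural network. Because the recursion is exact, $\log p(x_{1:T})$ can be maximised directly by gradient descent, giving monotonic alignment without any attention mechanism.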