Neural sequence-to-sequence TTS has demonstrated significantly better output quality than classical statistical parametric speech synthesis using HMMs. However, the new paradigm is not probabilistic, and the use of non-monotonic attention both increases training time and introduces "babbling" failure modes that are unacceptable in production. In this paper, we demonstrate that the old and new paradigms can be combined to obtain the advantages of both worlds, by replacing the attention in Tacotron 2 with an autoregressive left-right no-skip hidden Markov model defined by a neural network. This leads to an HMM-based neural TTS model with monotonic alignment, trained to maximise the full sequence likelihood without approximations. We discuss how to combine innovations from both classical and contemporary TTS for best results. The resulting system is smaller and simpler than Tacotron 2, and learns to align and speak in fewer iterations, whilst achieving the same naturalness prior to the post-net. Our system also allows easy control over speaking rate. Audio examples and code are available at https://shivammehta007.github.io/Neural-HMM/
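To make the alignment mechanism concrete, below is a minimal sketch (not the authors' implementation; all names and the stand-in tensors are hypothetical) of the exact log-space forward algorithm for a left-right no-skip HMM, i.e. the full-sequence likelihood that is maximised in place of an attention-based training objective. In the actual model the emission and transition parameters are produced autoregressively by the neural network; here random tensors stand in.

```python
import torch

def log_forward(log_emit: torch.Tensor, log_stay: torch.Tensor,
                log_advance: torch.Tensor) -> torch.Tensor:
    """Exact sequence log-likelihood of a left-right no-skip HMM.

    Args:
        log_emit:    (T, N) log p(frame_t | state_n), from the neural decoder.
        log_stay:    (N,)   log-probability of a self-transition per state.
        log_advance: (N,)   log-probability of advancing to the next state.
    Returns:
        Scalar log p(frames), marginalised over all monotonic alignments.
    """
    T, N = log_emit.shape
    neg_inf = log_emit.new_full((1,), float('-inf'))
    # alpha[n] = log-prob of the frames seen so far, ending in state n.
    alpha = log_emit.new_full((N,), float('-inf'))
    alpha[0] = log_emit[0, 0]          # must start in the first state
    for t in range(1, T):
        stay = alpha + log_stay                       # remain in state n
        move = torch.cat([neg_inf,                    # enter n from n - 1
                          alpha[:-1] + log_advance[:-1]])
        alpha = torch.logaddexp(stay, move) + log_emit[t]
    return alpha[-1]                   # must finish in the final state

# Toy usage with stand-in network outputs:
T, N = 50, 12
log_emit = torch.randn(T, N).log_softmax(dim=-1)  # stand-in emission log-probs
p_adv = torch.rand(N) * 0.8 + 0.1                 # stand-in advance probabilities
loss = -log_forward(log_emit, torch.log1p(-p_adv), torch.log(p_adv))
```

Every alignment marginalised over by this recursion is monotonic and skip-free by construction, which is why maximising this likelihood cannot produce the non-monotonic "babbling" failures associated with attention.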