Neural sequence-to-sequence TTS has achieved significantly better output quality than statistical speech synthesis using HMMs. However, neural TTS is generally not probabilistic and the use of non-monotonic attention both increases training time and introduces "babbling" failure modes that are unacceptable in production. This paper demonstrates that the old and new paradigms can be combined to obtain the advantages of both worlds, by replacing the attention in Tacotron 2 with an autoregressive left-right no-skip hidden Markov model defined by a neural network. This leads to an HMM-based neural TTS model with monotonic alignment, trained to maximise the full sequence likelihood without approximations. We discuss how to combine innovations from both classical and contemporary TTS for best results. The final system is smaller and simpler than Tacotron 2, and learns to speak with fewer iterations and less data, whilst achieving the same naturalness prior to the post-net. Unlike Tacotron 2, our system also allows easy control over speaking rate. Audio examples and code are available at https://shivammehta007.github.io/Neural-HMM/
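To make the abstract's claim of "maximising the full sequence likelihood without approximations" concrete, the sketch below computes the exact log-likelihood of a frame sequence under a left-right no-skip HMM with the forward algorithm. This is an illustrative example only, not the authors' implementation: in the actual model the emission and transition probabilities would be produced autoregressively by a neural network rather than fixed arrays, but the monotonic recursion has the same structure.

```python
# Minimal sketch (assumed, not the paper's code) of exact log-likelihood
# computation for a left-right no-skip HMM via the forward algorithm.
# State j at frame t can only be reached from state j (self-loop) or
# state j-1 (step forward), so the alignment is monotonic by construction.
import numpy as np

def lr_hmm_log_likelihood(log_emission, log_stay, log_step):
    """log_emission: (T, N) array of log p(frame_t | state_j).
       log_stay[j] = log p(j -> j), log_step[j] = log p(j -> j+1)."""
    T, N = log_emission.shape
    log_alpha = np.full(N, -np.inf)
    log_alpha[0] = log_emission[0, 0]              # must start in the first state
    for t in range(1, T):
        stay = log_alpha + log_stay                # remain in the same state
        step = np.full(N, -np.inf)
        step[1:] = log_alpha[:-1] + log_step[:-1]  # advance by exactly one state
        log_alpha = np.logaddexp(stay, step) + log_emission[t]
    return log_alpha[-1]                           # must end in the final state

# Toy usage: 3 states, 5 frames of random (normalised) emission probabilities.
rng = np.random.default_rng(0)
log_em = np.log(rng.dirichlet(np.ones(3), size=5))
p_stay = np.full(3, 0.6)
print(lr_hmm_log_likelihood(log_em, np.log(p_stay), np.log(1.0 - p_stay)))
```

Because every path through this transition structure visits the states in order without skips, summing over paths (the `logaddexp` step) yields the full sequence likelihood exactly, with no attention mechanism and no approximate alignment.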