Neural sequence-to-sequence TTS has achieved significantly better output quality than statistical speech synthesis using HMMs. However, neural TTS is generally not probabilistic and uses non-monotonic attention. Attention failures increase training time and can make synthesis babble incoherently. This paper describes how the old and new paradigms can be combined to obtain the advantages of both worlds, by replacing attention in neural TTS with an autoregressive left-right no-skip hidden Markov model defined by a neural network. Based on this proposal, we modify Tacotron 2 to obtain an HMM-based neural TTS model with monotonic alignment, trained to maximise the full sequence likelihood without approximation. We also describe how to combine ideas from classical and contemporary TTS for best results. The resulting example system is smaller and simpler than Tacotron 2, and learns to speak with fewer iterations and less data, whilst achieving comparable naturalness prior to the post-net. Our approach also allows easy control over speaking rate.
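To make the "full sequence likelihood without approximation" concrete, the following is a minimal sketch of the forward algorithm for a left-right, no-skip HMM in log-space. It assumes the per-state emission and transition log-probabilities have already been produced (in a neural HMM these would come from the network, conditioned autoregressively on previous acoustic frames, which this sketch omits); the function name and array layout are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def sequence_log_likelihood(log_emit, log_stay, log_move):
    """Exact log-likelihood of an observation sequence under a
    left-right, no-skip HMM, computed with the forward algorithm.

    log_emit : (T, N) array, log p(o_t | state n) for frame t, state n
               (assumed precomputed here; a neural HMM predicts these).
    log_stay : (N,) array, log-probability of the self-transition n -> n.
    log_move : (N,) array, log-probability of advancing n -> n + 1.
    The sequence is required to start in state 0 and end in state N - 1.
    """
    T, N = log_emit.shape
    neg_inf = -np.inf

    # alpha[n] = log p(o_1..o_t, state_t = n); initialise at t = 0 in state 0.
    alpha = np.full(N, neg_inf)
    alpha[0] = log_emit[0, 0]

    for t in range(1, T):
        new_alpha = np.full(N, neg_inf)
        for n in range(N):
            stay = alpha[n] + log_stay[n]                                # n -> n
            move = alpha[n - 1] + log_move[n - 1] if n > 0 else neg_inf  # n-1 -> n
            new_alpha[n] = np.logaddexp(stay, move) + log_emit[t, n]
        alpha = new_alpha

    return alpha[N - 1]  # sequence must terminate in the final state
```

Because each state can only persist or advance to its successor, the alignment between input symbols and output frames is monotonic by construction, and biasing the stay/advance probabilities at synthesis time gives the direct handle on speaking rate mentioned above.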