Whilst recent neural text-to-speech (TTS) approaches produce high-quality speech, they typically require a large amount of recorded speech from the target speaker. In previous work, a 3-step method was proposed to generate high-quality TTS while greatly reducing the amount of data required for training. However, we have observed a ceiling effect in the level of naturalness achievable for highly expressive voices when using this approach. In this paper, we present a method for building highly expressive TTS voices with as little as 15 minutes of speech data from the target speaker. Compared to the current state-of-the-art approach, our proposed improvements close the gap to recordings by 23.3% for naturalness of speech and by 16.3% for speaker similarity. Furthermore, we match the naturalness and speaker similarity of a Tacotron2-based full-data (~10 hours) model using only 15 minutes of target speaker data, and with 30 minutes or more we significantly outperform it. We propose two improvements: 1) replacing the autoregressive, attention-based TTS model with a non-autoregressive model that uses an external duration model in place of attention, and 2) adding a Conditional Generative Adversarial Network (cGAN) based fine-tuning step.
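To make the first improvement concrete, the sketch below is a minimal, hypothetical illustration (not the authors' exact architecture) of a non-autoregressive acoustic model in which an external duration model replaces attention: phoneme encodings are upsampled according to predicted per-phoneme durations ("length regulation") and then decoded into mel-spectrogram frames in parallel. The module names, layer sizes, and the use of PyTorch are assumptions made for illustration only.

```python
# Minimal sketch: non-autoregressive decoding driven by an external duration model.
# All sizes and module choices are illustrative, not the paper's specification.
import torch
import torch.nn as nn

class DurationPredictor(nn.Module):
    """Predicts a per-phoneme duration (in frames) from encoder outputs."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, enc):                              # enc: [batch, phonemes, dim]
        return self.net(enc).squeeze(-1).clamp(min=0)    # durations: [batch, phonemes]

def length_regulate(enc, durations):
    """Repeat each phoneme encoding by its (rounded) predicted duration."""
    frames = []
    for seq, dur in zip(enc, durations):                 # seq: [phonemes, dim]
        reps = dur.round().long().clamp(min=1)
        frames.append(seq.repeat_interleave(reps, dim=0))
    return nn.utils.rnn.pad_sequence(frames, batch_first=True)  # [batch, frames, dim]

class NonAutoregressiveDecoder(nn.Module):
    """Maps upsampled encodings to mel-spectrogram frames in parallel (no attention)."""
    def __init__(self, dim: int = 256, n_mels: int = 80):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, n_mels))

    def forward(self, upsampled):                        # [batch, frames, dim]
        return self.proj(upsampled)                      # [batch, frames, n_mels]

# Example: one utterance with 5 phonemes and 256-dim encoder outputs.
enc = torch.randn(1, 5, 256)
durations = DurationPredictor()(enc)
mels = NonAutoregressiveDecoder()(length_regulate(enc, durations))
print(mels.shape)                                        # torch.Size([1, n_frames, 80])
```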
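For the second improvement, the following is a hedged sketch of what a cGAN-based fine-tuning step could look like: the acoustic model acts as the generator, while a discriminator conditioned on the upsampled phoneme encodings scores mel-spectrograms as recorded or generated. The discriminator design and the non-saturating loss formulation are illustrative assumptions, not the setup specified in the paper.

```python
# Illustrative sketch of conditional GAN fine-tuning for an acoustic model.
import torch
import torch.nn as nn

class ConditionalDiscriminator(nn.Module):
    """Scores a mel-spectrogram given conditioning features (e.g. phoneme encodings)."""
    def __init__(self, n_mels: int = 80, cond_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_mels + cond_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

    def forward(self, mel, cond):                        # mel/cond: [batch, frames, ·]
        return self.net(torch.cat([mel, cond], dim=-1)).mean(dim=1)  # [batch, 1]

bce = nn.BCEWithLogitsLoss()

def gan_losses(disc, real_mel, fake_mel, cond):
    """Discriminator and generator losses; fake_mel comes from the TTS generator."""
    d_loss = bce(disc(real_mel, cond), torch.ones(real_mel.size(0), 1)) + \
             bce(disc(fake_mel.detach(), cond), torch.zeros(fake_mel.size(0), 1))
    g_loss = bce(disc(fake_mel, cond), torch.ones(fake_mel.size(0), 1))
    return d_loss, g_loss

# Example shapes: 2 utterances, 120 frames each.
disc = ConditionalDiscriminator()
real, fake, cond = torch.randn(2, 120, 80), torch.randn(2, 120, 80), torch.randn(2, 120, 256)
d_loss, g_loss = gan_losses(disc, real, fake, cond)
```

In such a setup, the generator loss would typically be combined with the usual spectrogram reconstruction loss during fine-tuning, so the adversarial term sharpens detail without letting the model drift from the target speaker's data.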