State-of-the-art text-to-speech (TTS) systems require several hours of recorded speech data to generate high-quality synthetic speech. When using reduced amounts of training data, standard TTS models suffer from speech quality and intelligibility degradations, making training low-resource TTS systems problematic. In this paper, we propose a novel extremely low-resource TTS method called Voice Filter that uses as little as one minute of speech from a target speaker. It uses voice conversion (VC) as a post-processing module appended to a pre-existing high-quality TTS system and marks a conceptual shift in the existing TTS paradigm, framing the few-shot TTS problem as a VC task. Furthermore, we propose to use a duration-controllable TTS system to create a parallel speech corpus to facilitate the VC task. Results show that the Voice Filter outperforms state-of-the-art few-shot speech synthesis techniques in terms of objective and subjective metrics on one minute of speech on a diverse set of voices, while being competitive against a TTS model built on 30 times more data.