We introduce SPEAR-TTS, a multi-speaker text-to-speech (TTS) system that can be trained with minimal supervision. By combining two types of discrete speech representations, we cast TTS as a composition of two sequence-to-sequence tasks: from text to high-level semantic tokens (akin to "reading") and from semantic tokens to low-level acoustic tokens ("speaking"). Decoupling these two tasks enables training of the "speaking" module using abundant audio-only data, and unlocks the highly efficient combination of pretraining and backtranslation to reduce the need for parallel data when training the "reading" component. To control the speaker identity, we adopt example prompting, which allows SPEAR-TTS to generalize to unseen speakers using only a short sample of 3 seconds, without any explicit speaker representation or speaker-id labels. Our experiments demonstrate that SPEAR-TTS achieves a character error rate that is competitive with state-of-the-art methods using only 15 minutes of parallel data, while matching ground-truth speech in terms of naturalness and acoustic quality, as measured in subjective tests.
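To make the two-stage composition concrete, the sketch below shows how the "reading" and "speaking" stages could be chained, with speaker identity supplied as a short acoustic-token prompt. The function names, token types, and the toy stand-in stages are illustrative assumptions for this sketch, not the paper's actual interfaces; in SPEAR-TTS each stage is a sequence-to-sequence model over discrete tokens.

```python
from typing import Callable, List, Optional

SemanticTokens = List[int]   # high-level discrete speech representation
AcousticTokens = List[int]   # low-level discrete codec representation


def synthesize(
    text: str,
    reading: Callable[[str], SemanticTokens],
    speaking: Callable[[SemanticTokens, Optional[AcousticTokens]], AcousticTokens],
    speaker_prompt: Optional[AcousticTokens] = None,
) -> AcousticTokens:
    """Compose the two stages: text -> semantic tokens -> acoustic tokens.

    `speaker_prompt` stands for a ~3 s example of the target voice encoded as
    acoustic tokens; the second stage conditions on it to reproduce the voice.
    """
    semantic = reading(text)                   # "reading": needs parallel text-audio data
    return speaking(semantic, speaker_prompt)  # "speaking": trainable on audio-only data


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; real stages are seq2seq models.
    toy_reading = lambda t: [ord(c) % 100 for c in t]
    toy_speaking = lambda s, p: (p or []) + [tok + 1 for tok in s]
    print(synthesize("hello world", toy_reading, toy_speaking, speaker_prompt=[7, 7, 7]))
```

The point of the decoupling is visible in the signatures: only `reading` requires transcribed speech, so scarce parallel data is spent on that stage alone, while `speaking` and the speaker prompting rely purely on audio.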