This paper proposes Virtuoso, a massively multilingual speech-text joint semi-supervised learning framework for text-to-speech synthesis (TTS) models. Existing multilingual TTS typically supports tens of languages, a small fraction of the thousands of languages in the world. One difficulty in scaling multilingual TTS to hundreds of languages is collecting high-quality speech-text paired data in low-resource languages. This study extends Maestro, a speech-text joint pretraining framework for automatic speech recognition (ASR), to speech generation tasks. To train a TTS model from various types of speech and text data, different training schemes are designed to handle supervised (paired TTS and ASR data) and unsupervised (untranscribed speech and unspoken text) datasets. Experimental evaluation shows that 1) multilingual TTS models trained with Virtuoso can achieve significantly better naturalness and intelligibility than baseline ones in seen languages, and 2) they can synthesize reasonably intelligible and natural-sounding speech for unseen languages for which no high-quality paired TTS data is available.