Developing Automatic Speech Recognition (ASR) for low-resource languages is a challenge due to the small amount of transcribed audio data. For many such languages, audio and text are available separately, but not audio with transcriptions. Using text, speech can be synthetically produced via text-to-speech (TTS) systems. However, many low-resource languages do not have quality TTS systems either. We propose an alternative: produce synthetic audio by running text from the target language through a trained TTS system for a higher-resource pivot language. We investigate when and how this technique is most effective in low-resource settings. In our experiments, using several thousand synthetic TTS text-speech pairs, with authentic data duplicated to balance the two, yields optimal results. Our findings suggest that searching over a set of candidate pivot languages can lead to marginal improvements and that, surprisingly, ASR performance can be harmed by increases in measured TTS quality. Applying these findings improves ASR by 64.5\% and 45.0\% character error reduction rate (CERR) respectively for two low-resource languages: Guaran\'i and Suba.
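As a note on the reported metric, character error reduction rate (CERR) is the relative drop in character error rate (CER) between a baseline and an improved system; for instance, a CER falling from 40\% to 14.2\% corresponds to a 64.5\% CERR. A minimal sketch of the computation, assuming CER is edit distance normalized by reference length (the helper names are illustrative, not from the paper):

```python
def edit_distance(ref: str, hyp: str) -> int:
    # Levenshtein distance between two character sequences,
    # computed with a single rolling row of the DP table.
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,            # insertion
                        dp[j - 1] + 1,        # deletion
                        prev + (ref[i - 1] != hyp[j - 1]))  # substitution
            prev = cur
    return dp[n]

def cer(ref: str, hyp: str) -> float:
    # character error rate: character edits normalized by reference length
    return edit_distance(ref, hyp) / len(ref)

def cerr(cer_baseline: float, cer_improved: float) -> float:
    # character error reduction rate: relative CER drop, as a percentage
    return 100.0 * (cer_baseline - cer_improved) / cer_baseline
```

For example, `cerr(0.40, 0.142)` evaluates to 64.5, matching the arithmetic above.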