Labeled audio data is insufficient to build satisfactory speech recognition systems for most of the world's languages. Some zero-resource methods attempt phoneme- or word-level speech recognition without labeled audio data in the target language, but their error rates are usually too high for real-world use. Recently, the representation ability of self-supervised pre-trained models has been found to be extremely beneficial for zero-resource phoneme recognition. To the best of our knowledge, this paper is the first attempt to extend the use of pre-trained models to word-level zero-resource speech recognition. This is done by fine-tuning the pre-trained models on IPA phoneme transcriptions and decoding with a language model trained on extra texts. Experiments on Wav2vec 2.0 and HuBERT models show that this method can achieve a word error rate below 20% on some languages, with an average error rate of 33.77% across 8 languages.
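The decoding step described above, recovering words from phoneme-level model output using a language model trained on text only, can be illustrated with a minimal sketch. Everything here is hypothetical: the toy IPA lexicon, the bigram log-probabilities, and the `decode` function are illustrative stand-ins, not the paper's actual decoder, which operates on real model posteriors with a full-scale LM.

```python
import math

# Hypothetical pronunciation lexicon: word -> IPA phoneme sequence.
LEXICON = {
    "the": ("ð", "ə"),
    "cat": ("k", "æ", "t"),
    "sat": ("s", "æ", "t"),
}

# Hypothetical bigram log-probabilities, standing in for an LM
# trained on extra text-only data.
LOGP = {
    ("<s>", "the"): math.log(0.9),
    ("the", "cat"): math.log(0.6),
    ("cat", "sat"): math.log(0.7),
    ("the", "sat"): math.log(0.1),
}
UNSEEN = math.log(1e-4)  # crude back-off for unseen bigrams

def decode(phonemes):
    """Segment a phoneme sequence into words, scoring with the bigram LM.

    Dynamic programming over states (position, last_word): each state keeps
    the best-scoring word sequence covering the phonemes up to `position`.
    """
    best = {(0, "<s>"): (0.0, [])}
    for pos in range(len(phonemes) + 1):
        for (p, last), (score, words) in list(best.items()):
            if p != pos:
                continue
            for word, pron in LEXICON.items():
                end = pos + len(pron)
                if tuple(phonemes[pos:end]) == pron:
                    s = score + LOGP.get((last, word), UNSEEN)
                    key = (end, word)
                    if key not in best or s > best[key][0]:
                        best[key] = (s, words + [word])
    # Keep only hypotheses that consume the whole phoneme sequence.
    finals = [(s, w) for (p, _), (s, w) in best.items() if p == len(phonemes)]
    return max(finals)[1] if finals else []
```

With this toy setup, `decode(["ð", "ə", "k", "æ", "t", "s", "æ", "t"])` returns `["the", "cat", "sat"]`: the LM resolves the segmentation of the phoneme stream into words, which is the role the extra-text language model plays on top of the fine-tuned phoneme recognizer.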