We study the effect of applying a language model (LM) to the output of Automatic Speech Recognition (ASR) systems for Indic languages. We fine-tune wav2vec $2.0$ models for $18$ Indic languages and rescore the results with language models trained on text drawn from a variety of sources. Our findings demonstrate that the average Character Error Rate (CER) decreases by over $28\%$ and the average Word Error Rate (WER) decreases by about $36\%$ after decoding with an LM. We show that a larger LM does not necessarily provide a substantial improvement compared to a more diverse one. We also demonstrate that high-quality transcriptions can be obtained on domain-specific data without retraining the ASR model, and show results on the biomedical domain.
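As a point of reference for the reported gains, CER and WER are both defined via Levenshtein edit distance, computed over characters and over words respectively. The sketch below is a minimal, self-contained illustration of these standard definitions (the function names and normalization by reference length are conventional choices, not drawn from the paper's evaluation code):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (insertions,
    deletions, and substitutions all cost 1)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(ref, hyp):
    """Character Error Rate: character-level edits / reference length."""
    return edit_distance(list(ref), list(hyp)) / len(ref)

def wer(ref, hyp):
    """Word Error Rate: word-level edits / number of reference words."""
    return edit_distance(ref.split(), hyp.split()) / len(ref.split())
```

For example, a hypothesis that substitutes one word in a three-word reference has a WER of $1/3$; the "over $28\%$" figure in the abstract refers to a relative reduction of this metric after LM decoding.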