Most previous work on Arabic diacritization has relied on training models from scratch. In this paper, we investigate how to leverage pre-trained language models for diacritization. We finetune token-free pre-trained multilingual models (ByT5) to predict and insert missing diacritics in Arabic text, a complex task that requires understanding both sentence semantics and the morphological structure of the tokens. We show that we can achieve state-of-the-art performance on the diacritization task with a minimal amount of training and no feature engineering, reducing WER by 40%. We release our finetuned models for the benefit of the research community.
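To make the text-to-text formulation concrete, the sketch below shows how such a finetuning step could look with the Hugging Face transformers library, casting diacritization as a sequence-to-sequence task over raw bytes. This is a minimal illustration only: the checkpoint size (`google/byt5-small`), the example sentence, and the absence of an optimizer loop are assumptions for brevity, not the paper's exact configuration.

```python
# Minimal sketch: diacritization as byte-level seq2seq with ByT5.
# Checkpoint size and example sentence are illustrative assumptions.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

# ByT5 is token-free: the tokenizer maps raw UTF-8 bytes to ids directly,
# so Arabic diacritic characters need no special vocabulary handling.
tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")

# Text-to-text pair: undiacritized input, fully diacritized target.
source = "ذهب الولد إلى المدرسة"            # input without diacritics
target = "ذَهَبَ الْوَلَدُ إِلَى الْمَدْرَسَةِ"  # reference with diacritics

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

# One finetuning step: standard cross-entropy loss over output bytes.
loss = model(**inputs, labels=labels).loss
loss.backward()

# At inference time, the finetuned model inserts the missing diacritics.
with torch.no_grad():
    pred = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(pred[0], skip_special_tokens=True))
```

Because the model operates on bytes rather than subword tokens, no Arabic-specific tokenizer or hand-crafted features are required; the same pipeline applies to any diacritized/undiacritized sentence pair.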