Spell correction remains a challenging problem for low-resource languages (LRLs). While pretrained language models (PLMs) have been employed for spell correction, their use is still limited to a handful of languages, and there has been no proper comparison across PLMs. We present the first empirical study on the effectiveness of PLMs for spell correction that includes LRLs. We find that Large Language Models (LLMs) outperform their counterparts (encoder-based and encoder-decoder models) when the fine-tuning dataset is large. This observation holds even for languages on which the LLM was not pre-trained. We release LMSpell, an easy-to-use spell correction toolkit that works across PLMs. It includes an evaluation function that compensates for LLM hallucination. Further, we present a case study on Sinhala to shed light on the plight of spell correction for LRLs.