Spell correction remains a challenging problem for low-resource languages (LRLs). While pre-trained language models (PLMs) have been employed for spell correction, their use is still limited to a handful of languages, and there has been no systematic comparison across PLMs. We present the first empirical study on the effectiveness of PLMs for spell correction that includes LRLs. We find that Large Language Models (LLMs) outperform their encoder-based and encoder-decoder counterparts when the fine-tuning dataset is large. This observation holds even for languages on which the LLM was not pre-trained. We release LMSpell, an easy-to-use spell correction toolkit that works across PLMs. It includes an evaluation function that compensates for LLM hallucination. Further, we present a case study on Sinhala to shed light on the plight of spell correction for LRLs.