African languages are severely under-represented in NLP research due to a lack of datasets covering several NLP tasks. While there are individual language-specific datasets that are being expanded to different tasks, only a handful of NLP tasks (e.g., named entity recognition and machine translation) have standardized benchmark datasets covering several geographically and typologically diverse African languages. In this paper, we develop MasakhaNEWS -- a new benchmark dataset for news topic classification covering 16 languages widely spoken in Africa. We provide an evaluation of baseline models by training classical machine learning models and fine-tuning several language models. Furthermore, we explore several alternatives to full fine-tuning of language models that are better suited to zero-shot and few-shot learning, such as cross-lingual parameter-efficient fine-tuning (like MAD-X), pattern-exploiting training (PET), prompting language models (like ChatGPT), and prompt-free sentence-transformer fine-tuning (SetFit and the Cohere Embedding API). Our evaluation in the zero-shot setting shows the potential of prompting ChatGPT for news topic classification in low-resource African languages, achieving an average performance of 70 F1 points without leveraging additional supervision like MAD-X. In the few-shot setting, we show that with as few as 10 examples per label, the PET approach achieves more than 90\% (i.e., 86.0 F1 points) of the performance of fully supervised training (92.6 F1 points).
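To make the zero-shot prompting setup concrete, here is a minimal sketch of news topic classification by prompting a chat model. It is illustrative only: it assumes the OpenAI Python client (v1+ API) and a MasakhaNEWS-style label set, and the prompt template, label names, and Swahili example text are placeholders rather than the paper's exact configuration.

```python
# Hypothetical sketch of zero-shot topic classification via prompting.
# Assumes the OpenAI Python client >= 1.0 and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Assumed label set in the style of MasakhaNEWS; not necessarily the exact one.
LABELS = ["business", "entertainment", "health", "politics",
          "religion", "sports", "technology"]

def classify_news(text: str) -> str:
    """Ask the model to choose exactly one topic label for a news text."""
    prompt = (
        "Classify the following news text into one of these topics: "
        + ", ".join(LABELS)
        + ".\nAnswer with the topic label only.\n\nText: " + text
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output simplifies evaluation
    )
    return response.choices[0].message.content.strip().lower()

# Illustrative Swahili headline ("Arsenal won the premier league match last night").
print(classify_news("Arsenal yashinda mechi ya ligi kuu jana usiku."))
```

Because the model answers with free-form text, an evaluation harness would still need to normalize or match the output against the label set before scoring F1.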
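For the few-shot, prompt-free direction, the sketch below shows how SetFit-style fine-tuning of a multilingual sentence transformer might look with a handful of labeled examples per class. The model checkpoint, label mapping, and toy training data are assumptions for illustration; the paper's actual hyperparameters and data splits are not reproduced here. (Newer `setfit` releases rename `SetFitTrainer` to `Trainer`.)

```python
# Hypothetical sketch of prompt-free few-shot fine-tuning with the setfit library.
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# Toy training set; in practice one would use ~10 examples per label.
train_ds = Dataset.from_dict({
    "text": ["The central bank raised interest rates again.",
             "The striker scored twice in the cup final."],
    "label": [0, 1],  # assumed mapping: 0 = business, 1 = sports
})

# Multilingual sentence-transformer backbone (illustrative choice).
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
)

trainer = SetFitTrainer(model=model, train_dataset=train_ds)
trainer.train()  # contrastive embedding fine-tuning + classification head

print(model.predict(["Manchester United signed a new midfielder."]))
```

The appeal of this family of methods in the few-shot regime is that no prompt engineering or verbalizer design is needed; the contrastively fine-tuned embeddings feed a small classification head directly.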