The AfriSenti-SemEval Shared Task 12 of SemEval-2023 comprises monolingual sentiment classification for 12 African languages (sub-task A), multilingual sentiment classification (sub-task B), and zero-shot sentiment classification (sub-task C). For sub-task A, we conducted experiments using classical machine learning classifiers, Afro-centric language models, and language-specific models. For sub-task B, we fine-tuned multilingual pre-trained language models that support many of the languages in the task. For sub-task C, we used a parameter-efficient adapter approach that leverages monolingual texts in the target language for effective zero-shot transfer. Our findings suggest that using pre-trained Afro-centric language models improves performance for low-resource African languages. Our adapter-based zero-shot experiments further suggest that promising results can be obtained with a limited amount of resources.
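To make the adapter setup concrete, the sketch below illustrates MAD-X-style stacking of language and task adapters for zero-shot transfer. It is a minimal illustration assuming the adapter-transformers library; the model and adapter names (xlm-roberta-base, source_lang, target_lang, sentiment) are placeholders rather than the exact configuration of our submission.

```python
# Minimal sketch of adapter-based zero-shot transfer (MAD-X-style stacking),
# assuming the adapter-transformers library. Names are illustrative only.
from transformers import AutoAdapterModel
from transformers.adapters.composition import Stack

model = AutoAdapterModel.from_pretrained("xlm-roberta-base")

# Language adapters: in practice these would be trained with masked language
# modelling on monolingual text of the source and target languages.
model.add_adapter("source_lang")
model.add_adapter("target_lang")

# Task adapter plus a 3-way classification head for sentiment.
model.add_adapter("sentiment")
model.add_classification_head("sentiment", num_labels=3)

# Freeze the backbone and train only the task adapter, stacked on top of the
# source-language adapter, using labelled source-language data.
model.train_adapter("sentiment")
model.set_active_adapters(Stack("source_lang", "sentiment"))
# ... run the standard fine-tuning loop here ...

# Zero-shot inference: swap in the target-language adapter while reusing the
# trained task adapter, so no labelled target-language data is required.
model.set_active_adapters(Stack("target_lang", "sentiment"))
```

Because only the small adapter modules are updated, this setup needs far fewer trainable parameters than full fine-tuning, which is what makes it attractive when target-language resources are limited.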