Named Entity Recognition (NER) plays a vital role in various Natural Language Processing tasks such as information retrieval, text classification, and question answering. However, NER can be challenging, especially in low-resource languages with limited annotated datasets and tools. This paper contributes to addressing these challenges by introducing MphayaNER, the first Tshivenda NER corpus in the news domain. We establish NER baselines by \textit{fine-tuning} state-of-the-art models on MphayaNER. The study also explores zero-shot transfer between Tshivenda and other related Bantu languages, with chiShona and Kiswahili yielding the best results. Augmenting MphayaNER with chiShona data was also found to improve model performance significantly. Both MphayaNER and the baseline models are made publicly available.