Named Entity Recognition (NER) is a fundamental task in natural language processing that involves identifying and classifying named entities in text. However, little work has been done on Complex Named Entity Recognition (CNER) in Bangla, despite it being the seventh most spoken language globally. CNER is more challenging than traditional NER, as it involves identifying and classifying complex and compound entities, which are not common in the Bangla language. In this paper, we present the winning solution of the Bangla Complex Named Entity Recognition Challenge, addressing the CNER task on the BanglaCoNER dataset using two different approaches: Conditional Random Fields (CRF) and finetuning transformer-based Deep Learning models such as BanglaBERT. The dataset consisted of 15,300 sentences for training and 800 sentences for validation, in the .conll format. Exploratory Data Analysis (EDA) revealed that the dataset had 7 different NER tags, with a notable presence of English words, suggesting that the dataset is synthetic and likely a product of translation. We experimented with a variety of feature combinations, including Part-of-Speech (POS) tags, word suffixes, gazetteers, and cluster information from embeddings, while also finetuning the BanglaBERT (large) model for NER. We found that not all linguistic patterns are immediately apparent or even intuitive to humans, which is why Deep Learning based models have proven more effective for NLP tasks, including CNER. Our finetuned BanglaBERT (large) model achieves an F1 score of 0.79 on the validation set. Overall, our study highlights the importance of Bangla Complex Named Entity Recognition, particularly in the context of synthetic datasets. Our findings also demonstrate the efficacy of Deep Learning models such as BanglaBERT for NER in the Bangla language.
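To illustrate the CRF side of the comparison, the sketch below shows the kind of hand-crafted feature extraction a sequence-labeling CRF typically consumes (as in sklearn-crfsuite-style pipelines). The feature names, the `word2features` helper, and the example sentence are illustrative assumptions, not the paper's exact feature set; they mirror the features the abstract names (POS tags, word suffixes, and a flag for English tokens).

```python
# Minimal sketch of CRF feature extraction for NER, assuming a
# sklearn-crfsuite-style pipeline where each token is represented
# as a dict of named features. Feature names are hypothetical.

def word2features(sentence, i):
    """Build a feature dict for the token at position i.

    `sentence` is a list of (word, pos_tag) pairs. Suffix features
    capture morphological cues; neighboring tokens give local context.
    """
    word, pos = sentence[i]
    features = {
        "bias": 1.0,
        "word": word,
        "suffix3": word[-3:],          # word-suffix feature
        "suffix2": word[-2:],
        "pos": pos,                    # Part-of-Speech feature
        "is_english": word.isascii(),  # flags English tokens in Bangla text
    }
    if i > 0:
        prev_word, prev_pos = sentence[i - 1]
        features.update({"-1:word": prev_word, "-1:pos": prev_pos})
    else:
        features["BOS"] = True  # beginning of sentence
    if i < len(sentence) - 1:
        next_word, next_pos = sentence[i + 1]
        features.update({"+1:word": next_word, "+1:pos": next_pos})
    else:
        features["EOS"] = True  # end of sentence
    return features

# Example: feature dicts for a short sentence (words here are English
# purely for readability; the same code applies to Bangla tokens).
sent = [("Dhaka", "NNP"), ("University", "NNP"), ("is", "VBZ"), ("old", "JJ")]
feats = [word2features(sent, i) for i in range(len(sent))]
```

The resulting list of per-token feature dicts (one list per sentence) is what a CRF trainer would fit against the gold NER tags; a gazetteer lookup or embedding-cluster ID would simply be added as further keys in the same dict.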