In recent years, low-resource Machine Reading Comprehension (MRC) has made significant progress, with models achieving remarkable performance on datasets in various languages. However, none of these models have been tailored to Urdu. This work explores the semi-automated creation of the Urdu Question Answering Dataset (UQuAD1.0), which combines machine-translated SQuAD data with human-generated samples derived from Wikipedia articles and Urdu reading-comprehension worksheets from Cambridge O-level books. UQuAD1.0 is a large-scale Urdu dataset for extractive machine reading comprehension, consisting of 49k question-answer pairs in passage, question, and answer format. Roughly 45,000 QA pairs were produced by machine translation of the original SQuAD1.0, and approximately 4,000 pairs via crowdsourcing. In this study, we evaluated two types of MRC models: rule-based baselines and Transformer-based models. Since the latter clearly outperformed the former, we concentrate solely on Transformer-based architectures. Using XLM-RoBERTa and multilingual BERT, we obtain F1 scores of 0.66 and 0.63, respectively.
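The extractive readers mentioned above (XLM-RoBERTa and multilingual BERT) both reduce to the same SQuAD-style decoding step: the model emits a start score and an end score per passage token, and the predicted answer is the highest-scoring valid span. The sketch below illustrates that decoding step only, with toy scores and an illustrative Urdu passage; it is not the paper's implementation and assumes nothing about the actual model outputs.

```python
def best_span(start_scores, end_scores, max_len=30):
    """Return (start, end) token indices maximizing start+end score,
    subject to start <= end and a bounded span length — the standard
    decoding rule for SQuAD-style extractive QA."""
    best, best_score = (0, 0), float("-inf")
    for s, s_score in enumerate(start_scores):
        for e in range(s, min(s + max_len, len(end_scores))):
            score = s_score + end_scores[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

# Toy example: passage tokens and made-up per-token scores
# (in practice these come from the Transformer's QA head).
tokens = ["اردو", "پاکستان", "کی", "قومی", "زبان", "ہے"]
start = [0.1, 2.0, 0.2, 0.1, 0.3, 0.0]
end   = [0.0, 0.5, 0.1, 0.2, 1.8, 0.4]
s, e = best_span(start, end)
print(" ".join(tokens[s:e + 1]))  # tokens of the selected span
```

The F1 metric reported for UQuAD1.0 is then computed by comparing the tokens of this predicted span against the gold answer tokens.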