The development of natural language processing (NLP) in general, and machine reading comprehension (MRC) in particular, has attracted great attention from the research community. In recent years, a few large datasets for machine reading comprehension in Vietnamese have been introduced, such as UIT-ViQuAD and UIT-ViNewsQA. However, these datasets are not diverse in their answers, which limits their usefulness for research. In this paper, we introduce UIT-ViWikiQA, the first dataset for evaluating sentence extraction-based machine reading comprehension in the Vietnamese language. The UIT-ViWikiQA dataset is converted from the UIT-ViQuAD dataset and comprises 23,074 question-answer pairs based on 5,109 passages from 174 Vietnamese Wikipedia articles. We propose a conversion algorithm to create the dataset, together with three types of approaches for sentence extraction-based machine reading comprehension in Vietnamese. Our experiments show that the best model is XLM-R_Large, which achieves an exact match (EM) of 85.97% and an F1-score of 88.77% on our dataset. In addition, we analyze the experimental results in terms of Vietnamese question types and the effect of context on the performance of the MRC models, thereby highlighting the challenges that the proposed UIT-ViWikiQA dataset poses to the language processing community.