This paper focuses on how to take advantage of external relational knowledge to improve machine reading comprehension (MRC) with multi-task learning. Most traditional MRC methods assume that the knowledge needed to derive the correct answer is present in the given documents. However, in real-world tasks, part of this knowledge may not be mentioned explicitly, and machines should be equipped with the ability to leverage external knowledge. In this paper, we integrate relational knowledge into an MRC model for commonsense reasoning. Specifically, on top of a pre-trained language model (LM), we design two auxiliary relation-aware tasks that predict whether a commonsense relation exists between two words and, if so, what its type is, in order to better model the interactions between the document and the candidate answer options. We conduct experiments on two multiple-choice benchmark datasets: SemEval-2018 Task 11 and the Cloze Story Test. The experimental results demonstrate the effectiveness of the proposed method, which achieves superior performance compared with comparable baselines on both datasets.
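To make the multi-task setup concrete, the sketch below illustrates one plausible reading of the abstract, not the authors' actual implementation. It assumes a BERT-style encoder from the Hugging Face `transformers` library and attaches two hypothetical auxiliary heads to paired word representations: a binary head for relation existence and a multi-class head for relation type (with labels assumed to come from an external source such as ConceptNet), whose losses are added to the main answer-selection loss.

```python
# A minimal, illustrative sketch of the multi-task setup described above.
# Assumptions (not from the paper): a BERT-style encoder, hypothetical
# head shapes, and a simple weighted sum of the three losses.
import torch
import torch.nn as nn
from transformers import BertModel

class RelationAwareMRC(nn.Module):
    def __init__(self, num_relation_types, lm_name="bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(lm_name)
        hidden = self.encoder.config.hidden_size
        # Main MRC head: scores one (document, question, option) sequence.
        self.answer_head = nn.Linear(hidden, 1)
        # Auxiliary task 1: does any commonsense relation hold between a
        # document word and an answer-option word? (binary)
        self.rel_exist_head = nn.Linear(2 * hidden, 2)
        # Auxiliary task 2: which relation type is it? (multi-class)
        self.rel_type_head = nn.Linear(2 * hidden, num_relation_types)

    def forward(self, input_ids, attention_mask, pair_idx):
        # pair_idx: (B, 2) token positions of a (doc word, option word) pair.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        tokens = out.last_hidden_state                    # (B, T, H)
        pooled = out.pooler_output                        # (B, H)
        option_score = self.answer_head(pooled).squeeze(-1)  # (B,)
        batch = torch.arange(input_ids.size(0))
        doc_vec = tokens[batch, pair_idx[:, 0]]           # (B, H)
        opt_vec = tokens[batch, pair_idx[:, 1]]           # (B, H)
        pair = torch.cat([doc_vec, opt_vec], dim=-1)      # (B, 2H)
        return option_score, self.rel_exist_head(pair), self.rel_type_head(pair)

def multi_task_loss(option_score, exist_logits, type_logits,
                    answer_label, exist_label, type_label, aux_weight=0.5):
    """Joint objective: MRC answer loss plus the two auxiliary losses."""
    ce = nn.CrossEntropyLoss()
    main = nn.BCEWithLogitsLoss()(option_score, answer_label.float())
    aux = ce(exist_logits, exist_label) + ce(type_logits, type_label)
    return main + aux_weight * aux
```

In this reading, the auxiliary heads are used only during training to shape the shared encoder's representations; at test time, only the answer-selection score is needed to rank the candidate options.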