Recently, end-to-end trained models for multiple-choice commonsense question answering (QA) have delivered promising results. However, such question-answering systems cannot be directly applied in real-world scenarios where answer candidates are not provided. Hence, a new benchmark challenge set for open-ended commonsense reasoning (OpenCSR) has recently been released, which contains natural science questions without any predefined choices. On the OpenCSR challenge set, many questions require implicit multi-hop reasoning and have a large decision space, reflecting the challenging nature of this task. Existing work on OpenCSR solely focuses on improving the retrieval process, which extracts relevant factual sentences from a textual knowledge base, leaving the important and non-trivial reasoning task outside its scope. In this work, we extend the scope to include a reasoner that constructs a question-dependent open knowledge graph based on the retrieved supporting facts and employs a sequential subgraph reasoning process to predict the answer. The subgraph can be seen as a concise and compact graphical explanation of the prediction. Experiments on two OpenCSR datasets show that the proposed model achieves strong performance on both benchmarks.
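To make the retrieve-then-reason pipeline described above concrete, the following is a minimal, self-contained sketch. It is not the paper's actual model: the keyword-overlap retriever, the co-occurrence graph builder, and the hop-by-hop subgraph scorer are simplified stand-ins, and the fact corpus and concept vocabulary are toy examples introduced only for illustration.

```python
# Illustrative sketch: retrieve facts -> build a question-dependent open KG ->
# expand a subgraph hop by hop to rank candidate answer concepts.
# All components are simplified stand-ins for the abstract's pipeline.
from collections import defaultdict

# Toy fact corpus standing in for a textual knowledge base.
FACTS = [
    "carbon dioxide is absorbed by plants during photosynthesis",
    "photosynthesis produces oxygen and glucose",
    "glucose stores chemical energy for the plant",
    "animals breathe oxygen to release energy",
]

# Toy concept vocabulary used to ground facts and questions in graph nodes.
CONCEPTS = {"carbon dioxide", "plants", "photosynthesis", "oxygen",
            "glucose", "energy", "animals"}


def retrieve(question, facts, k=3):
    """Rank facts by word overlap with the question (stand-in retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(facts, key=lambda f: -len(q_words & set(f.split())))
    return scored[:k]


def build_graph(facts):
    """Link concepts that co-occur in a retrieved fact (question-dependent open KG)."""
    graph = defaultdict(set)
    for fact in facts:
        mentioned = [c for c in CONCEPTS if c in fact]
        for a in mentioned:
            for b in mentioned:
                if a != b:
                    graph[a].add(b)
    return graph


def reason(question, graph, hops=2):
    """Expand a subgraph hop by hop from question concepts; score reached concepts."""
    q_concepts = {c for c in CONCEPTS if c in question.lower()}
    frontier, visited = set(q_concepts), set(q_concepts)
    scores = defaultdict(float)
    for hop in range(1, hops + 1):
        next_frontier = set()
        for node in frontier:
            for neighbor in graph[node]:
                if neighbor not in q_concepts:
                    scores[neighbor] += 1.0 / hop  # closer hops weigh more
                if neighbor not in visited:
                    next_frontier.add(neighbor)
                    visited.add(neighbor)
        frontier = next_frontier
    return sorted(scores.items(), key=lambda x: -x[1])


if __name__ == "__main__":
    question = "what gas do plants absorb during photosynthesis"
    facts = retrieve(question, FACTS)
    graph = build_graph(facts)
    print(reason(question, graph)[:3])  # ranked candidate answer concepts
```

In this toy run, the expanded subgraph ranks "carbon dioxide" highest, and the visited edges form a small explanatory subgraph analogous to the graphical explanation the abstract describes; the actual model replaces these heuristic components with learned retrieval and sequential subgraph reasoning.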