Multi-relation Question Answering is a challenging task, as it requires elaborate analysis of questions and reasoning over multiple fact triples in a knowledge base. In this paper, we present a novel model called the Interpretable Reasoning Network, which employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of an input question should be analyzed at each hop; predicts a relation that corresponds to the currently parsed results; uses the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model offers traceable and observable intermediate predictions for reasoning analysis and failure diagnosis.
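To make the hop-by-hop loop concrete, below is a minimal sketch of one way such a reasoner could be wired up. This is not the authors' exact formulation: the class name `HopByHopReasoner`, the soft relation prediction, the subtractive question update, and the additive state update are illustrative assumptions, written in PyTorch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HopByHopReasoner(nn.Module):
    """Sketch of a hop-by-hop reasoner: at each hop, predict a relation from
    the current question/state, then use it to update both the residual
    question representation and the reasoning state."""

    def __init__(self, vocab_size, num_relations, dim, num_hops=3):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.rel_emb = nn.Embedding(num_relations, dim)
        self.state_proj = nn.Linear(dim, dim)  # maps a relation into state space
        self.num_hops = num_hops

    def forward(self, question_ids, init_state):
        # question_ids: (batch, seq_len); init_state: (batch, dim),
        # e.g. the embedding of the question's topic entity.
        words = self.word_emb(question_ids)      # (batch, seq_len, dim)
        q = words.sum(dim=1)                     # residual question representation
        state = init_state
        hop_predictions = []
        for _ in range(self.num_hops):
            # Score every relation against the current question + state.
            logits = (q + state) @ self.rel_emb.weight.T  # (batch, num_relations)
            probs = F.softmax(logits, dim=-1)
            r = probs @ self.rel_emb.weight      # soft (expected) relation embedding
            q = q - r                            # "consume" the analyzed question part
            state = state + self.state_proj(r)   # advance the reasoning state
            hop_predictions.append(probs)        # inspectable per-hop prediction
        return state, hop_predictions
```

Because `hop_predictions` records a relation distribution at every hop, the loop exposes exactly the kind of traceable intermediate predictions the abstract describes, which is what makes per-hop failure diagnosis possible.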