We present IBR, an Iterative Backward Reasoning model for proof generation in rule-based Question Answering (QA), where models are required to reason over a series of textual rules and facts to find the relevant proof path and derive the final answer. We address the limitations of existing works in two respects: 1) we enhance the interpretability of the reasoning procedure with detailed tracking, by predicting nodes and edges in the proof path iteratively, backward from the question; 2) we improve efficiency and accuracy by reasoning over elaborate representations of nodes and history paths, without any intermediate text that may introduce external noise during proof generation. IBR consists of three main modules: QA and proof strategy prediction, which obtains the answer and offers guidance for the subsequent procedure; parent node prediction, which determines the node in the existing proof that a new child node will link to; and child node prediction, which finds the new node to be added to the proof. Experiments on both synthetic and paraphrased datasets demonstrate that IBR achieves better in-domain performance as well as cross-domain transferability than several strong baselines. Our code and models are available at https://github.com/find-knowledge/IBR .
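The iterative backward-reasoning loop described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names `predict_strategy`, `predict_parent`, and `predict_child` are hypothetical stand-ins for the three IBR modules, passed in as callables.

```python
# Hypothetical sketch of IBR's iterative backward proof construction.
# The three predictors stand in for the paper's three modules; their
# names and signatures are illustrative assumptions, not the real API.

def generate_proof(question, facts, rules,
                   predict_strategy, predict_parent, predict_child):
    """Grow a proof graph backward from the question, one edge per step."""
    proof_nodes = [question]   # the proof starts from the question node
    proof_edges = []           # (child, parent) links added so far
    while True:
        # Module 1: decide whether to keep extending the proof or stop.
        strategy = predict_strategy(question, proof_nodes, proof_edges)
        if strategy == "stop":
            break
        # Module 2: pick the existing node a new child will link to.
        parent = predict_parent(proof_nodes, proof_edges)
        # Module 3: pick the new node (a fact or rule) to attach.
        child = predict_child(parent, facts, rules, proof_edges)
        proof_nodes.append(child)
        proof_edges.append((child, parent))
    return proof_nodes, proof_edges
```

With trained module predictors plugged in, the returned node and edge sets form the proof path, built without generating any intermediate text.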