Defeasible reasoning is a mode of reasoning in which conclusions can be overturned by new evidence. The cognitive science literature on defeasible reasoning suggests that people form a mental model of the problem scenario before answering questions. We ask whether neural models can similarly benefit from envisioning the question scenario before answering a defeasible query. Our approach is, given a question, to have a model first create a graph of relevant influences, and then leverage that graph as an additional input when answering the question. Our system, CURIOUS, achieves a new state of the art on three defeasible reasoning datasets. This result is significant: it illustrates that performance can be improved by guiding a system to "think about" a question and explicitly model the scenario, rather than answering reflexively. Code, data, and pre-trained models are available at https://github.com/madaan/thinkaboutit.