Complex reasoning over text requires understanding and chaining together free-form predicates and logical connectives. Prior work has largely tried to do this either symbolically or with black-box transformers. We present a middle ground between these two extremes: a compositional model, reminiscent of neural module networks, that can perform chained logical reasoning. This model first finds relevant sentences in the context and then chains them together using neural modules. Our model gives significant performance improvements (up to a 29\% relative error reduction when combined with a reranker) on ROPES, a recently introduced complex reasoning dataset.
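To make the two-stage pipeline described above concrete, here is a minimal sketch of the select-then-chain control flow. This is not the authors' released code: the names `SentenceSelector`, `ChainModule`, and `reason`, the dot-product relevance scorer, the number of hops, and all dimensions are illustrative assumptions; a real system would use contextual encoders and re-attend to the context at each reasoning step.

```python
# Illustrative sketch (assumed, not the paper's implementation) of:
# (1) scoring context sentences for relevance to the question, then
# (2) chaining selected sentences through neural modules.
import torch
import torch.nn as nn


class SentenceSelector(nn.Module):
    """Scores each encoded context sentence against the question."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, question: torch.Tensor, sentences: torch.Tensor) -> torch.Tensor:
        # question: (dim,), sentences: (num_sents, dim)
        logits = sentences @ self.proj(question)      # (num_sents,)
        return torch.softmax(logits, dim=0)           # relevance distribution


class ChainModule(nn.Module):
    """One reasoning step: combines the running state with a selected sentence."""

    def __init__(self, dim: int):
        super().__init__()
        self.combine = nn.Linear(2 * dim, dim)

    def forward(self, state: torch.Tensor, sentence: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.combine(torch.cat([state, sentence], dim=-1)))


def reason(question, sentences, selector, modules):
    """Select relevant sentences, then chain them through the modules."""
    weights = selector(question, sentences)
    state = question
    for module in modules:
        # Hard selection for illustration; a real model would re-attend per hop.
        picked = sentences[weights.argmax()]
        state = module(state, picked)
    return state  # the final state would feed an answer classifier


if __name__ == "__main__":
    dim = 16
    selector = SentenceSelector(dim)
    modules = [ChainModule(dim) for _ in range(2)]    # a two-hop chain
    out = reason(torch.randn(dim), torch.randn(5, dim), selector, modules)
    print(out.shape)                                  # torch.Size([16])
```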