This paper presents Diff-Explainer, the first hybrid framework for explainable multi-hop inference that integrates explicit constraints with neural architectures through differentiable convex optimization. Specifically, Diff-Explainer allows for the fine-tuning of neural representations within a constrained optimization framework to answer and explain multi-hop questions in natural language. To demonstrate the efficacy of the hybrid framework, we combine existing ILP-based solvers for multi-hop Question Answering (QA) with Transformer-based representations. An extensive empirical evaluation on scientific and commonsense QA tasks demonstrates that integrating explicit constraints into an end-to-end differentiable framework can significantly improve the performance of non-differentiable ILP solvers, by 8.91%-13.3%. Moreover, additional analysis reveals that Diff-Explainer achieves strong performance compared to standalone Transformers and previous multi-hop approaches while still providing structured explanations in support of its predictions.
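To make the central mechanism concrete, the sketch below shows a minimal differentiable convex relaxation of an ILP-style fact-selection problem, built with the cvxpylayers library so that gradients flow from the solver's output back into neural relevance scores. This is an illustrative assumption, not the paper's actual formulation: the fact count, selection budget `k`, and smoothing term `eps` are hypothetical, and Diff-Explainer's real constraint set over explanation graphs is richer than this toy relaxation.

```python
# Minimal sketch (not the authors' code): a smoothed LP relaxation of an
# ILP fact-selection problem, made differentiable with cvxpylayers so that
# gradients reach the upstream (e.g., Transformer-derived) relevance scores.
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

n_facts, k, eps = 10, 3, 1e-2  # hypothetical: #candidate facts, budget, smoothing

x = cp.Variable(n_facts)   # relaxed 0/1 indicators: is fact i selected?
s = cp.Parameter(n_facts)  # neural relevance scores, fed in at solve time
# Quadratic term smooths the LP so the argmin is differentiable (plain LPs
# have piecewise-constant solutions with zero gradient almost everywhere).
objective = cp.Maximize(s @ x - eps * cp.sum_squares(x))
constraints = [x >= 0, x <= 1, cp.sum(x) <= k]  # relaxed ILP constraints
layer = CvxpyLayer(cp.Problem(objective, constraints),
                   parameters=[s], variables=[x])

scores = torch.randn(n_facts, requires_grad=True)  # stand-in for model scores
selection, = layer(scores)     # differentiable "solve" of the relaxed program
selection.sum().backward()     # gradients flow back through the optimizer
print(scores.grad)
```

In an end-to-end setup, a task loss on the selected facts (or on the answer they support) would replace `selection.sum()`, letting the constrained optimization layer fine-tune the underlying representations, which is the hybrid training scheme the abstract describes.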