Recently, deep learning models have made great progress in Math Word Problem (MWP) solving in terms of answer accuracy. However, they are uninterpretable since they mainly rely on shallow heuristics to achieve high performance, without understanding or reasoning about the underlying math logic. To address this issue and take a step towards interpretable MWP solving, we first construct a high-quality MWP dataset named InterMWP, which consists of 11,495 MWPs and annotates each solution equation with an interpretable logical formula based on algebraic knowledge as its grounded linguistic logic. Unlike existing MWP datasets, our InterMWP benchmark requires a solver not only to output the solution expressions but also to predict the corresponding logical formulas. We further propose a novel approach with logical prompts and interpretation generation, called LogicSolver. For each MWP, LogicSolver first retrieves highly correlated algebraic knowledge and then passes it to the backbone model as prompts to improve the semantic representation of the MWP. With these improved semantic representations, LogicSolver simultaneously generates the solution expressions and the interpretable knowledge formulas that accord with them. Experimental results show that LogicSolver achieves stronger logical formula-based interpretability than baselines while also attaining higher answer accuracy with the help of logical prompts.
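To make the logical-prompt step concrete, the following is a minimal sketch (not the paper's retrieval model, which the abstract does not specify): it ranks a small, hypothetical set of algebraic-knowledge formulas by word overlap with the MWP and prepends the top-k formulas to the problem text before it would be fed to the solver's encoder. The knowledge snippets, the scoring function, and the prompt format are illustrative assumptions.

```python
from collections import Counter

# Hypothetical algebraic-knowledge snippets (InterMWP's annotated formulas are richer).
KNOWLEDGE_BASE = [
    "total_price = unit_price * quantity",
    "distance = speed * time",
    "work_total = efficiency * time",
    "remaining = total - used",
]

def overlap_score(problem: str, formula: str) -> int:
    """Count word tokens shared between the problem and a knowledge formula."""
    p_tokens = Counter(problem.lower().split())
    f_tokens = Counter(
        formula.lower().replace("*", " ").replace("-", " ").replace("=", " ").replace("_", " ").split()
    )
    return sum(min(p_tokens[t], f_tokens[t]) for t in f_tokens)

def build_logical_prompt(problem: str, k: int = 2) -> str:
    """Select the top-k most related formulas and prepend them to the MWP as a prompt."""
    ranked = sorted(KNOWLEDGE_BASE, key=lambda f: overlap_score(problem, f), reverse=True)
    prompt = " ; ".join(ranked[:k])
    return f"[knowledge] {prompt} [problem] {problem}"

if __name__ == "__main__":
    mwp = "A car travels at a speed of 60 km per hour. What distance does it cover in 3 hours of time?"
    print(build_logical_prompt(mwp))
```

In the actual system, the prompted input would be encoded by the backbone model, and the decoder would emit both the solution expression and the logical formula grounding each generated operator.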