Solving math word problems is a task that analyzes the relations among quantities and requires an accurate understanding of contextual natural language information. Recent studies show that current models rely on shallow heuristics to predict solutions and can be easily misled by small textual perturbations. To address this problem, we propose a Textual Enhanced Contrastive Learning framework, which forces the model to distinguish semantically similar examples that hold different mathematical logic. We adopt a self-supervised strategy to enrich examples with subtle textual variance through textual reordering or problem re-construction. We then retrieve the hardest-to-differentiate samples from both equation and textual perspectives and guide the model to learn their representations. Experimental results show that our method achieves state-of-the-art performance on both widely used benchmark datasets and exquisitely designed challenge datasets in English and Chinese.\footnote{Our code and data are available at \url{https://github.com/yiyunya/Textual_CL_MWP}.}
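The contrastive objective sketched above can be written in a standard InfoNCE-style form. Note this generic formulation, the temperature $\tau$, and the similarity function $\mathrm{sim}(\cdot,\cdot)$ are assumptions for illustration, not the paper's exact loss:

\[
\mathcal{L}_{\mathrm{CL}} \;=\; -\log \frac{\exp\!\big(\mathrm{sim}(z_i, z_i^{+})/\tau\big)}{\exp\!\big(\mathrm{sim}(z_i, z_i^{+})/\tau\big) \;+\; \sum_{j} \exp\!\big(\mathrm{sim}(z_i, z_j^{-})/\tau\big)}
\]

where $z_i$ is the representation of a problem, $z_i^{+}$ that of a semantically equivalent variant (e.g., obtained by textual reordering), and the $z_j^{-}$ are the hardest-to-differentiate negatives retrieved from the equation and textual perspectives.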