This paper outlines the use of Transformer networks trained to translate math word problems into equivalent arithmetic expressions in infix, prefix, and postfix notations. We compare results across many neural configurations and find that most outperform previously reported approaches on three of four datasets, with significant accuracy gains of over 20 percentage points. On some datasets, the best neural approaches boost accuracy by 30% over the previous state-of-the-art.
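As a brief illustration of the three target notations (this is an explanatory sketch, not the paper's model; the word problem and expression below are hypothetical), the same arithmetic expression can be serialized from an expression tree in infix, prefix, or postfix form:

```python
# Sketch: one expression tree, three output notations.
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value   # operator ("+", "*") or operand ("3")
        self.left = left
        self.right = right

def infix(n):
    if n.left is None:
        return n.value
    return f"( {infix(n.left)} {n.value} {infix(n.right)} )"

def prefix(n):
    if n.left is None:
        return n.value
    return f"{n.value} {prefix(n.left)} {prefix(n.right)}"

def postfix(n):
    if n.left is None:
        return n.value
    return f"{postfix(n.left)} {postfix(n.right)} {n.value}"

# Hypothetical problem: "Four books at 3 dollars each plus a 2-dollar pen"
# -> target expression (3 * 4) + 2
expr = Node("+", Node("*", Node("3"), Node("4")), Node("2"))

print(infix(expr))    # ( ( 3 * 4 ) + 2 )
print(prefix(expr))   # + * 3 4 2
print(postfix(expr))  # 3 4 * 2 +
```

A sequence-to-sequence model trained on any one of these notations sees the same underlying expression; the notations differ only in operator placement and whether parentheses are needed (prefix and postfix are unambiguous without them).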