Automatic math word problem solving has attracted growing attention in recent years. The evaluation datasets used by previous works have serious limitations in terms of scale and diversity. In this paper, we release a new large-scale and template-rich math word problem dataset named Ape210K. It consists of 210K Chinese elementary school-level math problems, 9 times the size of Math23K, the largest public dataset to date. Each problem contains both the gold answer and the equations needed to derive that answer. Ape210K is also of greater diversity, with 56K templates, 25 times more than Math23K. Our analysis shows that solving Ape210K requires not only natural language understanding but also commonsense knowledge. We expect Ape210K to serve as a benchmark for math word problem solving systems. Experiments indicate that state-of-the-art models on the Math23K dataset perform poorly on Ape210K. We propose a copy-augmented and feature-enriched sequence-to-sequence (seq2seq) model, which outperforms existing models by 3.2% on the Math23K dataset and serves as a strong baseline for the Ape210K dataset. A significant gap remains between human performance and our baseline model, calling for further research efforts. We make the Ape210K dataset publicly available at https://github.com/yuantiku/ape210k
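To make the "copy-augmented" idea concrete, below is a minimal sketch (in PyTorch) of the decoding step in a pointer-generator-style copy mechanism, where the decoder mixes its vocabulary distribution with attention-derived copy probabilities over source tokens, so that numbers in the problem text can be copied verbatim into the equation. This is an illustrative sketch of the general technique, not the authors' released implementation; all names (`vocab_logits`, `p_gen`, `src_ids`, etc.) are assumptions.

```python
import torch
import torch.nn.functional as F

def copy_augmented_distribution(vocab_logits, attn_weights, p_gen, src_ids):
    """Mix the decoder's generation distribution with a copy distribution.

    vocab_logits: (batch, vocab_size)  raw decoder output scores
    attn_weights: (batch, src_len)     attention over encoder positions
    p_gen:        (batch, 1)           probability of generating vs. copying
    src_ids:      (batch, src_len)     vocabulary ids of the source tokens
    """
    p_vocab = F.softmax(vocab_logits, dim=-1)   # generation distribution
    final = p_gen * p_vocab                     # weighted "generate" part
    # Scatter-add attention mass onto the vocabulary ids of the source
    # tokens, weighted by (1 - p_gen): this lets the decoder copy tokens
    # (e.g., numbers from the problem text) directly into the output.
    copy_part = torch.zeros_like(p_vocab)
    copy_part.scatter_add_(1, src_ids, (1.0 - p_gen) * attn_weights)
    return final + copy_part

# Toy usage: the mixed output is still a valid probability distribution.
B, L, V = 2, 5, 100
dist = copy_augmented_distribution(
    torch.randn(B, V),
    F.softmax(torch.randn(B, L), dim=-1),
    torch.sigmoid(torch.randn(B, 1)),
    torch.randint(0, V, (B, L)),
)
assert torch.allclose(dist.sum(-1), torch.ones(B))
```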