Recently, quite a few novel neural architectures have been proposed to solve math word problems by predicting expression trees. These architectures range from seq2seq models to encoders that leverage graph relationships, combined with tree decoders. These models achieve good performance on various MWP datasets but perform poorly when applied to an adversarial challenge dataset, SVAMP. We present a novel model, MMTM, that leverages multi-task learning and multiple decoders during pre-training. It creates variant tasks by deriving labels from pre-order, in-order and post-order traversals of expression trees, and uses task-specific decoders in a multi-tasking framework. We leverage transformer architectures with lower dimensionality and initialize weights from the RoBERTa model. The MMTM model achieves better mathematical reasoning ability and generalisability, which we demonstrate by outperforming the best state-of-the-art baseline models among Seq2Seq, GTS, and Graph2Tree with a relative improvement of 19.4% on the adversarial challenge dataset SVAMP.
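As a minimal illustration of how the three variant label sequences could be derived from a single expression tree (a sketch under our own assumptions, not the authors' implementation; the `Node` class and function names are hypothetical), the following Python snippet traverses the tree for (3 + 5) * 2 in pre-, in- and post-order:

```python
# Hypothetical sketch: deriving three target label sequences (one per
# task-specific decoder) from one expression tree via different traversals.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Node:
    token: str                       # operator or operand, e.g. "+" or "5"
    left: Optional["Node"] = None
    right: Optional["Node"] = None


def pre_order(node: Optional[Node]) -> List[str]:
    # Root, then left subtree, then right subtree (prefix notation).
    if node is None:
        return []
    return [node.token] + pre_order(node.left) + pre_order(node.right)


def in_order(node: Optional[Node]) -> List[str]:
    # Left subtree, root, right subtree (infix notation).
    if node is None:
        return []
    return in_order(node.left) + [node.token] + in_order(node.right)


def post_order(node: Optional[Node]) -> List[str]:
    # Left subtree, right subtree, root (postfix notation).
    if node is None:
        return []
    return post_order(node.left) + post_order(node.right) + [node.token]


# Expression tree for (3 + 5) * 2
tree = Node("*", Node("+", Node("3"), Node("5")), Node("2"))

print(pre_order(tree))   # ['*', '+', '3', '5', '2']
print(in_order(tree))    # ['3', '+', '5', '*', '2']
print(post_order(tree))  # ['3', '5', '+', '2', '*']
```

In a multi-tasking setup of this kind, each of the three sequences would serve as the supervision target for its own decoder while the encoder is shared across tasks.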