Prior work in semantic parsing has shown that conventional seq2seq models fail at compositional generalization tasks. This limitation led to a resurgence of methods that model alignments between sentences and their corresponding meaning representations, either implicitly through latent variables or explicitly by taking advantage of alignment annotations. We take the second direction and propose TPOL, a two-step approach that first translates input sentences monotonically and then reorders them to obtain the correct output. This is achieved with a modular framework comprising a Translator and a Reorderer component. We test our approach on two popular semantic parsing datasets. Our experiments show that, by means of the monotonic translations, TPOL can learn reliable lexico-logical patterns from aligned data, significantly improving compositional generalization both over conventional seq2seq models and over other approaches that exploit gold alignments.