Transition-based parsers for Abstract Meaning Representation (AMR) rely on node-to-word alignments. These alignments are learned separately from parser training and require a complex pipeline of rule-based components, pre-processing, and post-processing to satisfy domain-specific constraints. Parsers also train on a single point estimate produced by the alignment pipeline, neglecting the uncertainty that arises from the inherent ambiguity of alignment. In this work we explore two avenues for overcoming these limitations. First, we propose a neural aligner for AMR that learns node-to-word alignments without relying on complex pipelines. We then explore a tighter integration of aligner and parser training by considering a distribution over oracle action sequences arising from aligner uncertainty. Empirical results show this approach leads to more accurate alignments and better generalization from the AMR2.0 to the AMR3.0 corpus. We attain a new state of the art for gold-only trained models, matching silver-trained performance on AMR3.0 without the need for beam search.
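To make the second idea concrete, the sketch below illustrates, under strong simplifying assumptions, what training on a distribution over oracle action sequences can look like: alignments are sampled from an aligner's posterior and the parser loss is averaged over the oracle sequences they induce, rather than committing to a single point-estimate alignment. All names here (`oracle_actions`, `parser_loss`, the toy dimensions) are hypothetical placeholders, not the paper's actual implementation.

```python
# Illustrative sketch only: a toy version of training a parser against oracle
# action sequences induced by sampled alignments, instead of a single
# point-estimate alignment. Not the paper's implementation.
import torch
import torch.nn.functional as F
from torch.distributions import Categorical


def oracle_actions(alignment: torch.Tensor, num_actions: int = 8) -> torch.Tensor:
    """Hypothetical oracle: deterministically maps a node-to-word alignment
    to a sequence of transition-action indices (stub for illustration)."""
    return alignment.cumsum(0) % num_actions


def parser_loss(action_logits: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
    """Cross-entropy of the parser's action scores against one oracle sequence."""
    return F.cross_entropy(action_logits, actions)


# Toy setup: 5 AMR nodes, 7 sentence tokens, 8 transition actions.
num_nodes, num_tokens, num_actions = 5, 7, 8
align_logits = torch.randn(num_nodes, num_tokens)                     # aligner scores
action_logits = torch.randn(num_nodes, num_actions, requires_grad=True)  # parser scores

# Sample several alignments from the aligner's posterior and average the
# oracle losses; only the parser receives gradients in this simplified sketch.
posterior = Categorical(logits=align_logits)
num_samples = 4
losses = []
for _ in range(num_samples):
    alignment = posterior.sample()                 # one word index per AMR node
    actions = oracle_actions(alignment, num_actions)
    losses.append(parser_loss(action_logits, actions))
loss = torch.stack(losses).mean()
loss.backward()
print(f"expected oracle loss over {num_samples} sampled alignments: {loss.item():.4f}")
```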