Coupled with biaffine decoders, transformers have been effectively adapted to text-to-graph transduction and have achieved state-of-the-art performance on AMR parsing. Many prior works, however, rely on the biaffine decoder for arc and/or label predictions, even though most of the features used by the decoder may already be learned by the transformer. This paper presents a novel approach to AMR parsing that combines heterogeneous data (tokens, concepts, labels) into one input to a transformer to learn attention, and uses only the attention matrices from the transformer to predict all elements in AMR graphs (concepts, arcs, labels). Although our models use significantly fewer parameters than the previous state-of-the-art graph parser, they show similar or better accuracy on AMR 2.0 and 3.0.
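To make the attention-only prediction idea concrete, below is a minimal PyTorch sketch (not the authors' implementation): it concatenates token and concept embeddings into one heterogeneous input sequence and reads arc scores directly off a self-attention matrix, with no separate biaffine decoder. All module names, dimensions, and the toy inputs are illustrative assumptions.

```python
# Minimal sketch (assumed names/dimensions, not the paper's code): score arcs
# using only the attention matrix of a transformer layer over a heterogeneous
# input sequence (token embeddings + concept embeddings).
import torch
import torch.nn as nn

class AttentionArcScorer(nn.Module):
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        # x: (batch, seq_len, d_model) -- tokens, concepts, etc. concatenated
        # along the sequence dimension into one heterogeneous input.
        # need_weights=True returns the attention matrix (averaged over heads).
        _, weights = self.attn(x, x, x, need_weights=True)
        return weights  # (batch, seq_len, seq_len); entry [i, j] scores arc i -> j

# Toy usage: 3 token embeddings + 2 concept embeddings in one sequence.
tok = torch.randn(1, 3, 64)
con = torch.randn(1, 2, 64)
x = torch.cat([tok, con], dim=1)       # heterogeneous input sequence
arc_scores = AttentionArcScorer()(x)   # arcs predicted from attention alone
print(arc_scores.shape)                # torch.Size([1, 5, 5])
```

In this sketch the attention weights double as arc scores, which is what removes the need for a dedicated biaffine scoring module and accounts for the parameter savings claimed above.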