Most graph-to-text models are built on the encoder-decoder framework with a cross-attention mechanism. Recent studies have shown that explicitly modeling the structure of the input graph can significantly improve performance. However, a vanilla structural encoder cannot capture, in a single forward pass, the specialized information required by every decoding step, resulting in inaccurate semantic representations. Meanwhile, cross-attention flattens the input graph into an unordered sequence, ignoring the original graph structure. As a result, the input graph context vector obtained in the decoder may be flawed. To address these issues, we propose a Structure-Aware Cross-Attention (SACA) mechanism that re-encodes the input graph representation, conditioned on the newly generated context, at each decoding step in a structure-aware manner. We further adapt SACA and introduce its variant, the Dynamic Graph Pruning (DGP) mechanism, which dynamically drops irrelevant nodes during decoding. We achieve new state-of-the-art results on two graph-to-text datasets, LDC2020T02 and ENT-DESC, with only a minor increase in computational cost.
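To make the two mechanisms concrete, the following is a minimal PyTorch sketch, not the authors' released implementation. It assumes the decoding context is joined to the graph as an extra node connected to all graph nodes, that attention is masked by the adjacency matrix so re-encoding respects the original structure, and that pruning keeps the top-k nodes most attended by the context; the names StructureAwareCrossAttention, dynamic_graph_prune, adj, and keep_ratio are all illustrative.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class StructureAwareCrossAttention(nn.Module):
        """Sketch of SACA: re-encode graph nodes conditioned on the
        decoding context, restricted to the graph's edges."""

        def __init__(self, d_model: int):
            super().__init__()
            self.q = nn.Linear(d_model, d_model)
            self.k = nn.Linear(d_model, d_model)
            self.v = nn.Linear(d_model, d_model)
            self.scale = d_model ** -0.5

        def forward(self, nodes, ctx, adj):
            # nodes: (B, N, d) encoder node states
            # ctx:   (B, 1, d) decoding context at the current step
            # adj:   (B, N+1, N+1) 0/1 adjacency including the context
            #        node, which links to every graph node; make sure each
            #        row has at least one edge (e.g. self-loops) to avoid
            #        NaNs in the softmax.
            x = torch.cat([nodes, ctx], dim=1)                     # (B, N+1, d)
            scores = self.q(x) @ self.k(x).transpose(-2, -1) * self.scale
            scores = scores.masked_fill(adj == 0, float("-inf"))  # keep structure
            attn = F.softmax(scores, dim=-1)
            out = attn @ self.v(x)
            # updated node states, updated context, context-to-node attention
            return out[:, :-1], out[:, -1:], attn[:, -1, :-1]

    def dynamic_graph_prune(nodes, ctx_attn, keep_ratio=0.5):
        """DGP-style sketch: retain only the nodes the current context
        attends to most; keep_ratio is an illustrative hyperparameter."""
        k = max(1, int(nodes.size(1) * keep_ratio))
        idx = ctx_attn.topk(k, dim=-1).indices                     # (B, k)
        idx = idx.unsqueeze(-1).expand(-1, -1, nodes.size(-1))
        return nodes.gather(1, idx)                                # (B, k, d)

    # Usage: at each decoding step, re-encode the graph conditioned on the
    # context, then drop weakly attended nodes.
    B, N, d = 2, 6, 16
    nodes, ctx = torch.randn(B, N, d), torch.randn(B, 1, d)
    adj = torch.ones(B, N + 1, N + 1)   # toy fully connected graph
    saca = StructureAwareCrossAttention(d)
    new_nodes, new_ctx, ctx_attn = saca(nodes, ctx, adj)
    kept = dynamic_graph_prune(new_nodes, ctx_attn)                # (B, 3, d)

Because only the small joint attention over N+1 positions is recomputed per step, a design like this keeps the extra cost per decoding step modest, consistent with the minor computational overhead reported in the abstract.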