Text generation from AMR requires mapping a semantic graph to a string that it annotates. However, Transformer-based graph encoders poorly capture the vertex dependencies that may benefit sequence prediction. To impose order on an encoder, we locally constrain vertex self-attention using a graph's tree decomposition. Instead of forming a full query-key bipartite graph, we restrict attention to vertices in the parent, subtree, and same-depth bags of each vertex. This hierarchical context lends both sparsity and structure to vertex state updates. We apply dynamic programming to derive a forest of tree decompositions, choosing the tree most structurally similar to the AMR. Our system outperforms a self-attentive baseline by 1.6 BLEU and 1.8 chrF++.
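As a rough illustration (not the authors' implementation), the sketch below builds the kind of sparse self-attention mask the abstract describes from a tree decomposition. The function name, the input encoding, and the simplification that each AMR vertex is assigned to a single "primary" bag are all assumptions for exposition; a real tree decomposition places a vertex in several bags.

```python
# Minimal sketch, assuming each vertex has one primary bag and the
# decomposition tree is given via parent pointers and bag depths.
import numpy as np

def bag_attention_mask(vertex_bag, bag_parent, bag_depth):
    """vertex_bag[i] -> bag id of vertex i
       bag_parent[b] -> parent bag of b (-1 for the root)
       bag_depth[b]  -> depth of bag b in the decomposition tree
       Returns a boolean mask M where M[i, j] = True iff vertex i may
       attend to vertex j (parent, subtree, or same-depth bag)."""
    n_bags = len(bag_parent)
    # descendant[b, c] = True iff bag c lies in the subtree rooted at b
    descendant = np.eye(n_bags, dtype=bool)
    for c in range(n_bags):
        b = bag_parent[c]
        while b != -1:
            descendant[b, c] = True
            b = bag_parent[b]
    n = len(vertex_bag)
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        bi = vertex_bag[i]
        for j in range(n):
            bj = vertex_bag[j]
            mask[i, j] = (
                bj == bag_parent[bi]               # parent bag
                or descendant[bi, bj]              # subtree bags (incl. own)
                or bag_depth[bj] == bag_depth[bi]  # same-depth bags
            )
    return mask
```

In an encoder, such a mask would typically be applied by setting disallowed query-key attention logits to negative infinity before the softmax, so each vertex state update aggregates only over its hierarchical context.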