We present Graformer, a novel Transformer-based encoder-decoder architecture for graph-to-text generation. With our graph self-attention, the encoding of a node relies on all nodes in the input graph, not only its direct neighbors, which facilitates the detection of global patterns. We represent the relation between two nodes as the length of the shortest path between them. Graformer learns to weight these node-node relations differently for different attention heads, in effect learning differently connected views of the input graph. We evaluate Graformer on two popular graph-to-text generation benchmarks, AGENDA and WebNLG, where it achieves strong performance while using many fewer parameters than other approaches.
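To make the mechanism concrete, the following is a minimal sketch (not the authors' code) of a graph self-attention layer in which the attention logits between two nodes are biased by a learned, per-head embedding of their shortest-path distance. The module name, the additive scalar-bias form, and the clamping of distances to a maximum value are assumptions for illustration only.

```python
# Hypothetical sketch of shortest-path-biased graph self-attention;
# names and the exact bias form are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphSelfAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int, max_dist: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One scalar bias per (shortest-path distance, head): each head can
        # weight the same node-node relation differently, yielding
        # differently connected "views" of the input graph.
        self.dist_bias = nn.Embedding(max_dist + 1, n_heads)

    def forward(self, x: torch.Tensor, dist: torch.Tensor) -> torch.Tensor:
        # x:    (batch, n_nodes, d_model) node representations
        # dist: (batch, n_nodes, n_nodes) integer shortest-path lengths,
        #       assumed clamped to [0, max_dist]
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, n, self.n_heads, self.d_head).transpose(1, 2)
        k = k.view(b, n, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(b, n, self.n_heads, self.d_head).transpose(1, 2)
        logits = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        # Add the learned distance bias, reshaped from (b, n, n, h) to
        # (b, h, n, n), so every node attends to every other node rather
        # than only to its direct neighbors.
        logits = logits + self.dist_bias(dist).permute(0, 3, 1, 2)
        attn = F.softmax(logits, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, -1)
        return self.out(out)
```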