Most previous work on neural text generation from graph-structured data relies on standard sequence-to-sequence methods, which linearise the input graph before feeding it to a recurrent neural network. In this paper, we propose an alternative encoder based on graph convolutional networks that directly exploits the input structure. We report results on two graph-to-sequence datasets that empirically show the benefits of explicitly encoding the input graph structure.
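To make the contrast concrete, the graph convolutional encoder builds node representations by aggregating features from each node's neighbours rather than from a linearised token sequence. Below is a minimal sketch of a single graph convolution layer with symmetric degree normalisation (in the style of Kipf and Welling); the function name, toy graph, and weights are illustrative assumptions, not the paper's actual model or hyperparameters.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution: ReLU(D^{-1/2} (A+I) D^{-1/2} H W).

    A: (n, n) adjacency matrix, H: (n, d) node features,
    W: (d, k) learnable projection. All names are illustrative.
    """
    # Add self-loops so each node keeps its own features.
    A_hat = A + np.eye(A.shape[0])
    # Symmetric degree normalisation of the adjacency matrix.
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    # Aggregate neighbour features, project, apply a ReLU nonlinearity.
    return np.maximum(0.0, A_norm @ H @ W)

# Toy graph: 3 nodes on a path 0-1-2, 4-dim input features, 2-dim output.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.ones((3, 4))
W = np.full((4, 2), 0.5)
out = gcn_layer(A, H, W)
print(out.shape)  # one representation per node: (3, 2)
```

Stacking several such layers lets information flow along multi-hop paths in the input graph, which is what a sequence encoder over a linearisation can only approximate.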