The celebrated \emph{Sequence-to-Sequence learning (Seq2Seq)} technique and its numerous variants achieve excellent performance on many tasks. However, many machine learning tasks have inputs naturally represented as graphs, and existing Seq2Seq models face a significant challenge in achieving accurate conversion from graph form to the appropriate sequence. To address this challenge, we introduce a general end-to-end graph-to-sequence neural encoder-decoder architecture that maps an input graph to a sequence of vectors and uses an attention-based LSTM method to decode the target sequence from these vectors. Our method first generates the node and graph embeddings using an improved graph-based neural network with a novel aggregation strategy that incorporates edge direction information into the node embeddings. We further introduce an attention mechanism that aligns node embeddings with the decoding sequence to better cope with large graphs. Experimental results on the bAbI, Shortest Path, and Natural Language Generation tasks demonstrate that our model achieves state-of-the-art performance and significantly outperforms baseline systems; with the proposed aggregation strategy, the model converges rapidly toward optimal performance.
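As a concrete illustration of the direction-aware aggregation described above, the following is a minimal sketch of one propagation hop that pools a node's out-neighbors and in-neighbors separately, so that edge direction information is preserved in the node embeddings. The class name \texttt{BiDirNodeAggregator}, the mean aggregator, and the toy graph are illustrative assumptions, not the paper's reference implementation.

\begin{verbatim}
import torch
import torch.nn as nn

class BiDirNodeAggregator(nn.Module):
    # One hop of direction-aware aggregation (illustrative sketch).
    # In- and out-neighbors are pooled separately and passed through
    # distinct linear maps, so edge direction survives the aggregation.
    def __init__(self, dim):
        super().__init__()
        self.w_fwd = nn.Linear(2 * dim, dim)  # node + successor mean
        self.w_bwd = nn.Linear(2 * dim, dim)  # node + predecessor mean

    def forward(self, h, adj_fwd, adj_bwd):
        # h: (num_nodes, dim); adj_*: row-normalized adjacency matrices
        m_fwd = adj_fwd @ h                   # mean over successors
        m_bwd = adj_bwd @ h                   # mean over predecessors
        h_fwd = torch.relu(self.w_fwd(torch.cat([h, m_fwd], dim=-1)))
        h_bwd = torch.relu(self.w_bwd(torch.cat([h, m_bwd], dim=-1)))
        return torch.cat([h_fwd, h_bwd], dim=-1)  # (num_nodes, 2 * dim)

# Toy directed graph with edges 0 -> 1 and 1 -> 2.
A = torch.tensor([[0., 1., 0.],
                  [0., 0., 1.],
                  [0., 0., 0.]])
row_norm = lambda M: M / M.sum(-1, keepdim=True).clamp(min=1.0)
nodes = torch.randn(3, 4)
out = BiDirNodeAggregator(4)(nodes, row_norm(A), row_norm(A.t()))
print(out.shape)  # torch.Size([3, 8])
\end{verbatim}

In this sketch, a graph-level embedding can then be obtained by pooling the per-node vectors, while the attention-based decoder attends over the node embeddings directly, which is what allows the model to cope with large graphs.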