The celebrated Sequence-to-Sequence (Seq2Seq) learning technique and its numerous variants achieve excellent performance on many tasks. However, many machine learning tasks have inputs that are naturally represented as graphs, and existing Seq2Seq models struggle to accurately convert such graph-structured inputs into the appropriate target sequences. To address this challenge, we introduce a novel, general, end-to-end graph-to-sequence neural encoder-decoder model that maps an input graph to a sequence of vectors and uses an attention-based LSTM to decode the target sequence from these vectors. Our method first generates node and graph embeddings using an improved graph-based neural network with a novel aggregation strategy that incorporates edge-direction information into the node embeddings. We further introduce an attention mechanism that aligns node embeddings with the decoding sequence to better cope with large graphs. Experimental results on bAbI, Shortest Path, and Natural Language Generation tasks demonstrate that our model achieves state-of-the-art performance and significantly outperforms existing graph neural network, Seq2Seq, and Tree2Seq models; with the proposed bi-directional node embedding aggregation strategy, the model converges rapidly to optimal performance.
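To make the bi-directional aggregation idea concrete, the following is a minimal sketch, not the paper's reference implementation: it assumes mean-pooled forward (outgoing-edge) and backward (incoming-edge) adjacency matrices, and all module and parameter names (BiDirectionalAggregator, forward_adj, backward_adj) are illustrative assumptions.

```python
# Minimal sketch of bi-directional node-embedding aggregation (assumed design,
# not the authors' exact architecture). Each node aggregates its forward
# (successor) and backward (predecessor) neighbours separately, then fuses
# the two directional views into a single embedding.
import torch
import torch.nn as nn


class BiDirectionalAggregator(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fwd_linear = nn.Linear(2 * dim, dim)  # node state + forward-neighbour summary
        self.bwd_linear = nn.Linear(2 * dim, dim)  # node state + backward-neighbour summary
        self.fuse = nn.Linear(2 * dim, dim)        # combine the two directional views

    def forward(self, h, forward_adj, backward_adj):
        # h: (num_nodes, dim) node embeddings
        # forward_adj / backward_adj: (num_nodes, num_nodes) row-normalised
        # adjacency matrices over outgoing and incoming edges respectively.
        fwd_neigh = forward_adj @ h   # mean of successor embeddings
        bwd_neigh = backward_adj @ h  # mean of predecessor embeddings
        h_fwd = torch.relu(self.fwd_linear(torch.cat([h, fwd_neigh], dim=-1)))
        h_bwd = torch.relu(self.bwd_linear(torch.cat([h, bwd_neigh], dim=-1)))
        return torch.relu(self.fuse(torch.cat([h_fwd, h_bwd], dim=-1)))
```

In a full model of this kind, the graph embedding could be obtained by pooling the resulting node embeddings, while the attention-based LSTM decoder attends over the per-node vectors at each decoding step.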