Neural networks have been shown to be an effective tool for learning algorithms over graph-structured data. However, graph representation techniques---which convert graphs to real-valued vectors for use with neural networks---are still in their infancy. Recent work has proposed several approaches (e.g., graph convolutional networks), but these methods have difficulty scaling and generalizing to graphs of varying sizes and shapes. We present Graph2Seq, a new technique that represents each vertex of a graph as an infinite time series. By not limiting the representation to a fixed dimension, Graph2Seq scales naturally to graphs of arbitrary size and shape. Graph2Seq is also reversible, allowing full recovery of the graph structure from the sequences. By analyzing a formal computational model for graph representation, we show that an unbounded sequence is necessary for scalability. Our experimental results with Graph2Seq show strong generalization and new state-of-the-art performance on a variety of graph combinatorial optimization problems.
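To make the idea concrete, here is a minimal sketch of how a per-vertex time series can arise from repeatedly applying a local update at every vertex. The specific update rule used below (a ReLU of a noisy neighborhood sum) and the function name `vertex_time_series` are illustrative assumptions, not the paper's exact dynamics.

```python
import numpy as np

def vertex_time_series(adj, steps, seed=0):
    """Hypothetical sketch: evolve a scalar state at each vertex with a
    local update so that vertex v yields a sequence x_v(0), x_v(1), ...
    The update (ReLU of the neighborhood sum plus noise) is an assumed
    stand-in for the paper's dynamics.

    adj:   (n, n) 0/1 adjacency matrix
    steps: length of the truncated sequence to emit
    Returns an (n, steps) array; row v is vertex v's trajectory."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    x = rng.random(n)                 # random initial vertex states
    series = np.empty((n, steps))
    for t in range(steps):
        series[:, t] = x
        # Each vertex aggregates its neighbors' states plus noise;
        # the ReLU keeps the dynamics nonlinear.
        x = np.maximum(adj @ x + rng.normal(0.0, 0.1, size=n), 0.0)
    return series

# Example: a 4-cycle; each row of the output is one vertex's time series.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(vertex_time_series(A, steps=5))
```

Note that `steps` only truncates the sequence for computation; the representation itself is the unbounded trajectory, which is what allows the dimension to grow with the graph rather than being fixed in advance.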