We present a graph neural network model for solving graph-to-graph learning problems. Most deep learning on graphs considers ``simple'' problems such as graph classification or regressing real-valued graph properties. For such tasks, the main requirement on the intermediate representations of the data is that they maintain the structure needed for the output, i.e., keeping classes separated or preserving the ordering required by the regressor. However, a number of learning tasks, such as graph-valued regression, generative modeling, or graph autoencoding, aim to predict graph-structured output. To do this successfully, the learned representations need to preserve far more structure. We present a conditional auto-regressive model for graph-to-graph learning and illustrate its representational capabilities in three settings: on challenging subgraph prediction tasks from graph algorithmics; as a graph autoencoder for reconstruction and visualization; and for pretraining representations that enable graph classification with limited labeled data.