We investigate graph representation learning approaches that enable models to generalize across graphs: given a model trained on the representations from one graph, our goal is to run inference with those same model parameters on representations computed over a new graph, unseen during training, with minimal degradation in accuracy. This contrasts with the more common task of performing inference on unseen nodes of the same graph. We show that using random projections to estimate multiple powers of the transition matrix allows us to build a set of isomorphism-invariant features usable across a variety of tasks. The resulting features recover enough information about a node's local neighborhood to enable inference with relevance competitive with other approaches while maintaining computational efficiency.
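To make the construction concrete, the following is a minimal Python/SciPy sketch of the core idea: estimating several powers of the random-walk transition matrix through a single Gaussian random projection, applied one walk step at a time so that no matrix power is ever materialized. The function name, projection dimension, and scaling are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
import scipy.sparse as sp

def transition_power_sketches(adj, num_powers=3, dim=128, seed=0):
    """Sketch (assumed construction): per-node features from random
    projections of the first `num_powers` powers of the transition matrix.

    adj: (n, n) scipy.sparse adjacency matrix.
    Returns an (n, num_powers * dim) dense feature matrix.
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    # Transition matrix T = D^{-1} A; rows of isolated nodes stay zero.
    deg = np.asarray(adj.sum(axis=1)).ravel()
    inv_deg = np.where(deg > 0, 1.0 / deg, 0.0)
    T = sp.diags(inv_deg) @ adj
    # Gaussian random projection; the 1/sqrt(dim) scaling preserves inner
    # products between rows of T^k in expectation (Johnson-Lindenstrauss).
    X = rng.normal(scale=1.0 / np.sqrt(dim), size=(n, dim))
    sketches = []
    for _ in range(num_powers):
        X = T @ X  # one random-walk step: X now sketches the next power of T
        sketches.append(X)
    # Node i's feature vector concatenates its sketched rows of T^1 ... T^K,
    # summarizing its K-step neighborhood distributions.
    return np.concatenate(sketches, axis=1)
```

Because each step only multiplies the current sketch by the sparse transition matrix, the whole computation costs O(K |E| d) time and O(n d) memory for K powers, which is what makes this family of features computationally cheap relative to explicitly forming powers of an n-by-n matrix.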