Graph-structured data arise in many scenarios, and a fundamental problem is quantifying the similarity between graphs for tasks such as classification. R-convolution graph kernels are positive-semidefinite functions that decompose graphs into substructures and compare them. One obstacle to the effective implementation of this idea is that the substructures are not independent, which leads to a high-dimensional feature space. In addition, graph kernels cannot capture the complex high-order interactions between vertices. To mitigate these two problems, we propose a framework called DeepMap to learn deep representations for graph feature maps. The learned deep representation of a graph is a dense, low-dimensional vector that captures complex high-order interactions in a vertex neighborhood. DeepMap extends Convolutional Neural Networks (CNNs) to arbitrary graphs by generating aligned vertex sequences and building a receptive field for each vertex. We empirically validate DeepMap on various graph classification benchmarks and demonstrate that it achieves state-of-the-art performance.
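To illustrate the idea of building a receptive field for each vertex, the sketch below grows a fixed-size, breadth-first neighborhood around a vertex and orders same-hop vertices by a canonical criterion (here, descending degree). This is a minimal illustrative example, not the paper's actual alignment scheme; the function name, the degree-based ordering, and the `-1` padding value are assumptions made for this sketch.

```python
def receptive_field(adj, v, k):
    """Breadth-first receptive field of size k for vertex v.

    adj: dict mapping each vertex to its list of neighbors.
    Vertices at the same hop distance are ordered by descending
    degree, then by vertex id (an illustrative canonical ordering;
    DeepMap's alignment scheme may differ). The result is padded
    with -1 if fewer than k vertices are reachable.
    """
    seen = {v}
    field = [v]
    frontier = [v]
    while frontier and len(field) < k:
        nxt = []
        for u in frontier:
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    nxt.append(w)
        # canonical ordering of the newly discovered hop
        nxt.sort(key=lambda w: (-len(adj[w]), w))
        field.extend(nxt)
        frontier = nxt
    field = field[:k]
    field += [-1] * (k - len(field))
    return field

# Toy graph: a path 0-1-2-3 plus a pendant edge 1-4
adj = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2], 4: [1]}
print(receptive_field(adj, 0, 4))  # → [0, 1, 2, 4]
```

Producing such fixed-size, consistently ordered vertex sequences is what lets a standard 1-D convolution slide over an otherwise irregular graph.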