The convolution operator at the core of many modern neural architectures can effectively be seen as performing a dot product between an input matrix and a filter. While this is readily applicable to data such as images, which can be represented as regular grids in Euclidean space, extending the convolution operator to graphs proves more challenging due to their irregular structure. In this paper, we propose to use graph kernels, i.e., kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain. This allows us to define an entirely structural model that does not require computing the embedding of the input graph. Our architecture allows us to plug in any type and number of graph kernels and has the added benefit of providing some interpretability in terms of the structural masks learned during training, similar to the convolutional masks of traditional convolutional neural networks. We perform an extensive ablation study to investigate the impact of the model hyper-parameters, and we show that our model achieves competitive performance on standard graph classification datasets.
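To make the idea concrete, below is a minimal illustrative sketch of a graph-kernel convolution. All specifics here are assumptions for illustration only: the vertex-label histogram kernel, the ego-network neighborhoods, and names such as `graph_kernel_conv` are not taken from the paper, whose actual model learns the structural masks end-to-end and can plug in any graph kernel.

```python
import numpy as np

def ego_network(adj, node, radius=1):
    """Return indices of nodes within `radius` hops of `node` (BFS)."""
    frontier, visited = {node}, {node}
    for _ in range(radius):
        nxt = set()
        for u in frontier:
            nxt |= set(np.nonzero(adj[u])[0])
        frontier = nxt - visited
        visited |= nxt
    return sorted(visited)

def label_histogram_kernel(labels_a, labels_b, n_labels):
    """Toy graph kernel: inner product of vertex-label histograms."""
    ha = np.bincount(labels_a, minlength=n_labels)
    hb = np.bincount(labels_b, minlength=n_labels)
    return float(ha @ hb)

def graph_kernel_conv(adj, labels, masks, n_labels, radius=1):
    """Compare each node's ego-network against each structural mask.

    adj:    (n, n) adjacency matrix of the input graph
    labels: (n,) integer node labels
    masks:  list of (mask_adj, mask_labels) pairs -- the "filters",
            which a real model would learn during training
    Returns an (n, n_masks) feature map of kernel responses.
    """
    n = adj.shape[0]
    out = np.zeros((n, len(masks)))
    for v in range(n):
        sub_labels = labels[ego_network(adj, v, radius)]
        for k, (_, mask_labels) in enumerate(masks):
            # The toy histogram kernel ignores mask_adj; a structure-aware
            # kernel (e.g., Weisfeiler-Lehman) would exploit it.
            out[v, k] = label_histogram_kernel(sub_labels, mask_labels, n_labels)
    return out

# Toy usage: a 4-cycle with two node labels and two hypothetical masks.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
labels = np.array([0, 1, 0, 1])
masks = [(None, np.array([0, 0, 1])), (None, np.array([1, 1]))]
print(graph_kernel_conv(adj, labels, masks, n_labels=2))
```

As in a standard convolutional layer, each mask produces one channel of the output feature map; swapping the histogram kernel for a structure-aware graph kernel changes what each channel responds to, without altering the overall layer interface.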