Graph convolutional networks are a popular class of deep neural network algorithms that have shown success in a variety of relational learning tasks. Despite this success, graph convolutional networks exhibit a number of peculiar features, including a bias toward learning oversmoothed and homophilic functions, which are not easily diagnosed due to the complex nature of these algorithms. We propose to bridge this gap in understanding by studying the neural tangent kernel of sheaf convolutional networks, a topological generalization of graph convolutional networks. To this end, we derive a parameterization of the neural tangent kernel for sheaf convolutional networks that separates the function into two parts: one driven by a forward diffusion process determined by the graph, and the other determined by the composite effect of nodes' activations on the output layer. This geometrically focused derivation yields a number of immediate insights, which we discuss in detail.