A Sheaf Neural Network (SNN) is a type of Graph Neural Network (GNN) that operates on a sheaf, an object that equips a graph with vector spaces over its nodes and edges and linear maps between these spaces. SNNs have been shown to have useful theoretical properties that help tackle issues arising from heterophily and over-smoothing. One complication intrinsic to these models is finding a good sheaf for the task to be solved. Previous works proposed two diametrically opposed approaches: manually constructing the sheaf based on domain knowledge and learning the sheaf end-to-end using gradient-based methods. However, domain knowledge is often insufficient, while learning a sheaf could lead to overfitting and significant computational overhead. In this work, we propose a novel way of computing sheaves drawing inspiration from Riemannian geometry: we leverage the manifold assumption to compute manifold-and-graph-aware orthogonal maps, which optimally align the tangent spaces of neighbouring data points. We show that this approach achieves promising results with less computational overhead when compared to previous SNN models. Overall, this work provides an interesting connection between algebraic topology and differential geometry, and we hope that it will spark future research in this direction.
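The key construction sketched in the abstract, orthogonal maps that align the tangent spaces of neighbouring data points, can be illustrated concretely. Below is a minimal sketch, assuming tangent spaces are estimated by local PCA over the k nearest neighbours of each point and each alignment map is obtained by solving an orthogonal Procrustes problem between neighbouring tangent bases; the function names, the kNN-based tangent estimation, and all parameters are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def local_tangent_basis(X, idx, k):
    """Estimate principal directions at point `idx` via local PCA over its
    k nearest neighbours (assumption: plain Euclidean kNN)."""
    dists = np.linalg.norm(X - X[idx], axis=1)
    nbrs = np.argsort(dists)[1:k + 1]          # skip the point itself
    centred = X[nbrs] - X[nbrs].mean(axis=0)
    # Rows of Vt are principal directions; the top-d rows span the tangent space.
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    return Vt

def align_tangent_spaces(T_u, T_v, d):
    """Orthogonal d x d map O aligning the tangent basis at v with the one at u,
    i.e. the orthogonal Procrustes solution of argmin_O ||O T_v - T_u||_F."""
    Tu, Tv = T_u[:d], T_v[:d]                  # top-d principal directions
    U, _, Vt = np.linalg.svd(Tu @ Tv.T)
    return U @ Vt                              # orthogonal by construction

# Toy usage on random points in R^3 (purely illustrative, not manifold data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
T0 = local_tangent_basis(X, 0, k=10)
T1 = local_tangent_basis(X, 1, k=10)
O = align_tangent_spaces(T0, T1, d=2)
print(np.allclose(O @ O.T, np.eye(2)))         # True: the map is orthogonal
```

In a sheaf built this way, such an orthogonal map would play the role of the restriction map attached to the edge between two neighbouring nodes, so the sheaf is determined by the data geometry rather than by domain knowledge or gradient-based learning.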