We introduce SignNet and BasisNet -- new neural architectures that are invariant to two key symmetries displayed by eigenvectors: (i) sign flips, since if $v$ is an eigenvector then so is $-v$; and (ii) more general basis symmetries, which occur in higher dimensional eigenspaces with infinitely many choices of basis eigenvectors. We prove that under certain conditions our networks are universal, i.e., they can approximate any continuous function of eigenvectors with the desired invariances. When used with Laplacian eigenvectors, our networks are provably more expressive than existing spectral methods on graphs; for instance, they subsume all spectral graph convolutions, certain spectral graph invariants, and previously proposed graph positional encodings as special cases. Experiments show that our networks significantly outperform existing baselines on molecular graph regression, learning expressive graph representations, and learning neural fields on triangle meshes. Our code is available at https://github.com/cptq/SignNet-BasisNet.
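To make the sign-invariance idea concrete, below is a minimal PyTorch sketch of a sign-invariant network over $k$ eigenvectors, computing $f(v_1, \dots, v_k) = \rho\big(\big[\phi(v_i) + \phi(-v_i)\big]_{i=1}^{k}\big)$, which is unchanged when any $v_i$ is replaced by $-v_i$. This is an illustrative sketch, not the authors' implementation: the class name `SignNetSketch` and the plain-MLP choices for $\phi$ and $\rho$ (with assumed hidden sizes) are our own simplifications.

```python
import torch
import torch.nn as nn

class SignNetSketch(nn.Module):
    """Sketch of a sign-invariant map over k eigenvectors:
    f(v_1, ..., v_k) = rho([phi(v_i) + phi(-v_i)]_i).
    phi and rho are plain MLPs here purely for illustration."""
    def __init__(self, n_nodes, k, hidden=64, out_dim=32):
        super().__init__()
        # phi: maps one eigenvector (a vector over the n nodes) to a hidden feature
        self.phi = nn.Sequential(
            nn.Linear(n_nodes, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        # rho: combines the k sign-invariant features into one embedding
        self.rho = nn.Sequential(
            nn.Linear(k * hidden, hidden), nn.ReLU(), nn.Linear(hidden, out_dim)
        )

    def forward(self, V):
        # V: (n_nodes, k) matrix with eigenvectors as columns
        cols = V.t()                          # (k, n_nodes), one eigenvector per row
        h = self.phi(cols) + self.phi(-cols)  # invariant to v_i -> -v_i
        return self.rho(h.flatten())          # (out_dim,) graph-level embedding
```

A quick check of the invariance on the eigenvectors of a random symmetric matrix (again an illustrative usage, not from the paper):

```python
n, k = 8, 3
A = torch.rand(n, n); A = (A + A.t()) / 2   # random symmetric matrix
_, evecs = torch.linalg.eigh(A)             # eigenvectors as columns
model = SignNetSketch(n, k)
V = evecs[:, :k]
signs = torch.tensor([1.0, -1.0, 1.0])      # flip the sign of one eigenvector
print(torch.allclose(model(V), model(V * signs)))  # True: sign-invariant
```

Handling the more general basis symmetries of higher-dimensional eigenspaces (BasisNet) requires operating on quantities that are invariant to orthogonal changes of basis within each eigenspace, which this sketch does not attempt.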