We introduce SignNet and BasisNet -- new neural architectures that are invariant to two key symmetries displayed by eigenvectors: (i) sign flips, since if $v$ is an eigenvector then so is $-v$; and (ii) more general basis symmetries, which occur in higher dimensional eigenspaces with infinitely many choices of basis eigenvectors. We prove that our networks are universal, i.e., they can approximate any continuous function of eigenvectors with the desired invariances. Moreover, when used with Laplacian eigenvectors, our architectures are provably expressive for graph representation learning: they can approximate any spectral graph convolution, can compute spectral invariants that go beyond message passing neural networks, and can provably simulate previously proposed graph positional encodings. Experiments show the strength of our networks for molecular graph regression, learning expressive graph representations, and learning neural fields on triangle meshes. Our code is available at https://github.com/cptq/SignNet-BasisNet .
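To make the sign-invariance idea concrete, here is a minimal sketch of a SignNet-style module: a shared network $\phi$ is applied to both $v$ and $-v$ and the results are summed, so the output cannot change under sign flips; a second network $\rho$ then combines the per-eigenvector features. This is an illustrative simplification in PyTorch, not the paper's full architecture; the class name, layer sizes, and aggregation are assumptions.

```python
import torch
import torch.nn as nn

class SignInvariantNet(nn.Module):
    """Sketch of a sign-invariant network: f(V) = rho([phi(v_i) + phi(-v_i)]_i).
    Flipping the sign of any eigenvector row leaves the output unchanged."""

    def __init__(self, n_nodes: int, k: int, hidden: int, out_dim: int):
        super().__init__()
        # phi maps a single eigenvector (length n_nodes) to a feature vector
        self.phi = nn.Sequential(
            nn.Linear(n_nodes, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        # rho mixes the concatenated features of all k eigenvectors
        self.rho = nn.Sequential(
            nn.Linear(k * hidden, hidden), nn.ReLU(), nn.Linear(hidden, out_dim)
        )

    def forward(self, V: torch.Tensor) -> torch.Tensor:
        # V: (k, n_nodes), rows are eigenvectors
        feats = self.phi(V) + self.phi(-V)  # sign-invariant per eigenvector
        return self.rho(feats.flatten())

# Sanity check: arbitrary sign flips do not change the output.
k, n = 4, 10
net = SignInvariantNet(n_nodes=n, k=k, hidden=32, out_dim=8)
V = torch.randn(k, n)
signs = torch.tensor([1.0, -1.0, 1.0, -1.0]).unsqueeze(1)
assert torch.allclose(net(V), net(signs * V), atol=1e-5)
```

The symmetrization $\phi(v) + \phi(-v)$ is what enforces invariance: each row's contribution is identical whether the row enters as $v$ or $-v$.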