Many machine learning tasks involve learning functions that are known to be invariant or equivariant to certain symmetries of the input data. However, it is often challenging to design neural network architectures that respect these symmetries while remaining expressive and computationally efficient; Euclidean motion invariant/equivariant graph or point cloud neural networks are a prominent example. We introduce Frame Averaging (FA), a general-purpose and systematic framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types. Our framework builds on the well-known group averaging operator, which guarantees invariance or equivariance but is in general intractable. In contrast, we observe that for many important classes of symmetries, this operator can be replaced with an averaging operator over a small subset of the group elements, called a frame. We show that averaging over a frame guarantees exact invariance or equivariance while often being much cheaper to compute than averaging over the entire group. Furthermore, we prove that FA-based models have maximal expressive power in a broad setting and, in general, preserve the expressive power of their backbone architectures. Using frame averaging, we propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs. We demonstrate the practical effectiveness of FA on several applications, including point cloud normal estimation, beyond-$2$-WL graph separation, and $n$-body dynamics prediction, achieving state-of-the-art results in all of these benchmarks.
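As a brief sketch of the core construction (the notation below is ours, following standard frame-averaging conventions, and is not defined in this abstract): let a group $G$ act on the input space $V$ through a representation $\rho_1$ and on the output space $W$ through $\rho_2$, and let $\mathcal{F} : V \to 2^G \setminus \{\emptyset\}$ be a frame, i.e., a set-valued map satisfying the equivariance condition $\mathcal{F}(\rho_1(g)\,x) = g\,\mathcal{F}(x)$ for all $g \in G$ and $x \in V$. Frame averaging then turns an arbitrary backbone $\phi : V \to W$ into the map
\[
  \langle \phi \rangle_{\mathcal{F}}(x) \;=\; \frac{1}{|\mathcal{F}(x)|} \sum_{g \in \mathcal{F}(x)} \rho_2(g)\, \phi\bigl(\rho_1(g)^{-1} x\bigr),
\]
which is $G$-equivariant whenever the frame condition holds; taking $\rho_2$ to be the trivial representation yields the invariant case. The point is that $|\mathcal{F}(x)|$ can be small (or the sum tractable) even when $G$ itself is large or infinite, so the average over the full group is never needed.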