Steerable convolutional neural networks (CNNs) provide a general framework for building neural networks equivariant to translations and other transformations belonging to an origin-preserving group $G$, such as reflections and rotations. They rely on standard convolutions with $G$-steerable kernels obtained by analytically solving the group-specific equivariance constraint imposed onto the kernel space. As the solution is tailored to a particular group $G$, the implementation of a kernel basis does not generalize to other symmetry transformations, which complicates the development of group equivariant models. We propose using implicit neural representation via multi-layer perceptrons (MLPs) to parameterize $G$-steerable kernels. The resulting framework offers a simple and flexible way to implement Steerable CNNs and generalizes to any group $G$ for which a $G$-equivariant MLP can be built. We apply our method to point cloud (ModelNet-40) and molecular data (QM9) and demonstrate a significant improvement in performance compared to standard Steerable CNNs.
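To make the idea of an implicit kernel concrete, the sketch below shows a minimal continuous convolution over a point cloud in which the kernel is an MLP mapping a relative coordinate to a $(c_{out} \times c_{in})$ matrix. This is an illustrative assumption, not the paper's implementation: the weights, layer sizes, and `continuous_conv` helper are hypothetical, and a plain MLP as used here is only translation equivariant (via relative positions); the paper's contribution is to replace it with a $G$-equivariant MLP so the kernel satisfies the steerability constraint.

```python
import numpy as np

rng = np.random.default_rng(0)
c_in, c_out, hidden = 3, 4, 16

# Hypothetical MLP weights: map a 3-D relative coordinate
# to a flattened (c_out x c_in) kernel matrix.
W1 = rng.normal(size=(3, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, c_out * c_in)); b2 = np.zeros(c_out * c_in)

def kernel(rel):
    """Implicit kernel: evaluate the MLP at relative positions rel (..., 3)."""
    h = np.tanh(rel @ W1 + b1)
    return (h @ W2 + b2).reshape(*rel.shape[:-1], c_out, c_in)

def continuous_conv(points, feats):
    """f_out(x_i) = sum_j K(x_j - x_i) f_in(x_j) over all points."""
    rel = points[None, :, :] - points[:, None, :]   # (n, n, 3)
    K = kernel(rel)                                 # (n, n, c_out, c_in)
    return np.einsum('ijoc,jc->io', K, feats)

points = rng.normal(size=(5, 3))
feats = rng.normal(size=(5, c_in))
out = continuous_conv(points, feats)
print(out.shape)  # (5, 4)
```

Because the kernel only sees relative positions, translating all input points leaves the output unchanged; rotation equivariance would additionally require the MLP itself to be $G$-equivariant, as proposed in the paper.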