Equivariance to symmetries has proven to be a powerful inductive bias in deep learning research. Recent works on mesh processing have concentrated on various kinds of natural symmetries, including translations, rotations, scaling, node permutations, and gauge transformations. To date, no existing architecture is equivariant to all of these transformations. In this paper, we present an attention-based architecture for mesh data that is provably equivariant to all transformations mentioned above. Our pipeline relies on the use of relative tangential features: a simple, effective, equivariance-friendly alternative to raw node positions as inputs. Experiments on the FAUST and TOSCA datasets confirm that our proposed architecture achieves improved performance on these benchmarks and is indeed equivariant, and therefore robust, to a wide variety of local/global transformations.
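To make the idea of relative tangential features concrete, the following is a minimal sketch (not the authors' implementation): for each vertex, the offsets to its neighbors are taken relative to the vertex position (removing translations), projected onto the vertex's tangent plane, and normalized by a local scale. The function name, the dict-based neighbor structure, and the mean-norm scale normalization are illustrative assumptions.

```python
import numpy as np

def relative_tangential_features(positions, normals, neighbors):
    """Hypothetical sketch of relative tangential features.

    positions: (N, 3) array of vertex coordinates
    normals:   (N, 3) array of vertex normals
    neighbors: dict mapping vertex index -> list of neighbor indices
    Returns a dict mapping vertex index -> (k, 3) feature array that is
    invariant to global translations and global scalings of `positions`.
    """
    feats = {}
    for i, nbrs in neighbors.items():
        n = normals[i] / np.linalg.norm(normals[i])
        # Relative offsets: global translations cancel out here.
        rel = positions[list(nbrs)] - positions[i]
        # Project onto the tangent plane at vertex i (remove normal component).
        tang = rel - np.outer(rel @ n, n)
        # Normalize by a local scale so global rescaling cancels out
        # (mean tangential edge length; an illustrative choice).
        scale = np.linalg.norm(tang, axis=1).mean() + 1e-12
        feats[i] = tang / scale
    return feats
```

In this form, translating or uniformly rescaling all vertex positions leaves the features unchanged, which is what makes them an equivariance-friendly substitute for raw node positions.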