Equivariance to symmetries has proven to be a powerful inductive bias in deep learning research. Recent works on mesh processing have concentrated on various kinds of natural symmetries, including translations, rotations, scaling, node permutations, and gauge transformations. To date, no existing architecture is equivariant to all of these transformations. Moreover, previous implementations have not always applied these symmetry transformations to the test dataset, which makes it difficult to verify whether a model actually attains the claimed equivariance properties. In this paper, we present an attention-based architecture for mesh data that is provably equivariant to all of the transformations mentioned above. We carry out experiments on the FAUST and TOSCA datasets, applying these symmetries to the test set only. Our results confirm that our proposed architecture is equivariant, and therefore robust, to these local and global transformations.
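As a brief reminder of the property claimed above (this is the standard group-theoretic definition; the symbols $G$, $\rho_{\mathrm{in}}$, and $\rho_{\mathrm{out}}$ are introduced here for illustration and are not notation taken from this paper): a map $f$ is equivariant to a symmetry group $G$ when transforming the input and then applying $f$ yields the same result as applying $f$ and then transforming the output,

$$
f\bigl(\rho_{\mathrm{in}}(g)\, x\bigr) \;=\; \rho_{\mathrm{out}}(g)\, f(x) \qquad \text{for all } g \in G .
$$

Invariance is the special case $\rho_{\mathrm{out}}(g) = \mathrm{id}$. For the translations, rotations, and scalings listed above, $\rho_{\mathrm{in}}(g)$ acts on vertex coordinates; for node permutations it reorders the mesh vertices; and for gauge transformations it changes the local reference frames in which tangential features are expressed.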