We introduce a general method for learning representations that are equivariant to symmetries of data. Our central idea is to decompose the latent space into an invariant factor and the symmetry group itself. The components semantically correspond to intrinsic data classes and poses, respectively. The learner is self-supervised and infers these semantics from relative symmetry information alone. The approach is motivated by theoretical results from group theory and guarantees representations that are lossless, interpretable, and disentangled. We provide an empirical investigation via experiments on datasets with a variety of symmetries. Results show that our representations capture the geometry of data and outperform other equivariant representation learning frameworks.
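To make the decomposition concrete, here is a minimal toy sketch (not the paper's learned model) for the group SO(2) acting on points in the plane: the invariant factor is the radius (the "class") and the group element is the angle (the "pose"). The `encode` and `rotate` functions below are hypothetical illustrations; they show, in closed form, the equivariance property a learned encoder would satisfy, namely that acting on the data leaves the invariant unchanged and shifts the pose by the group element.

```python
import numpy as np

def encode(x):
    # Decompose a 2-D point into an invariant factor (radius ~ "class")
    # and an SO(2) group element (angle ~ "pose").
    r = np.linalg.norm(x)
    theta = np.arctan2(x[1], x[0])
    return r, theta

def rotate(x, g):
    # Act on the data with a rotation by angle g (an element of SO(2)).
    R = np.array([[np.cos(g), -np.sin(g)],
                  [np.sin(g),  np.cos(g)]])
    return R @ x

x = np.array([3.0, 4.0])
g = 0.5
r1, t1 = encode(x)
r2, t2 = encode(rotate(x, g))

# Equivariance: the invariant is preserved, the pose shifts by exactly g.
assert np.isclose(r1, r2)
assert np.isclose((t2 - t1) % (2 * np.pi), g)
```

A learned encoder replaces the closed-form `encode` above, but the same relative-pose relation between pairs of transformed samples is what supervises it.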