We introduce a general method for learning representations that are equivariant to symmetries of data. Our central idea is to decompose the latent space into an invariant factor and the symmetry group itself. These components correspond semantically to intrinsic data classes and poses, respectively. The model is trained with a loss that encourages equivariance, relying on supervision in the form of relative symmetry information. The approach is grounded in theoretical results from group theory and guarantees representations that are lossless, interpretable, and disentangled. We provide an empirical investigation through experiments on datasets exhibiting a variety of symmetries. Results show that our representations capture the geometry of the data and outperform other equivariant representation learning frameworks.
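To make the latent decomposition concrete, the following is a minimal sketch, assuming the symmetry group is planar rotation SO(2) and that each training pair comes with its relative rotation angle as supervision. The `Encoder`, `rot`, and `equivariance_loss` names are illustrative placeholders, not the paper's actual implementation.

```python
# A minimal sketch of class-pose decomposition with an equivariance loss,
# assuming the symmetry group is SO(2). All names here are hypothetical.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Maps an input to (class code, pose), where the pose is an SO(2)
    element represented as a unit vector (cos t, sin t)."""

    def __init__(self, in_dim: int, class_dim: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, class_dim + 2),
        )
        self.class_dim = class_dim

    def forward(self, x):
        h = self.backbone(x)
        c, pose = h[:, :self.class_dim], h[:, self.class_dim:]
        pose = pose / pose.norm(dim=-1, keepdim=True)  # project onto SO(2)
        return c, pose


def rot(g):
    """Batch of 2x2 rotation matrices from a batch of angles g."""
    cos, sin = torch.cos(g), torch.sin(g)
    return torch.stack([torch.stack([cos, -sin], dim=-1),
                        torch.stack([sin, cos], dim=-1)], dim=-2)


def equivariance_loss(enc, x, x_g, g):
    """x_g is x transformed by the group element with angle g (the relative
    symmetry supervision). The loss encourages the class codes to match
    (invariance) and the pose of x_g to equal g acting on the pose of x
    (equivariance)."""
    c1, p1 = enc(x)
    c2, p2 = enc(x_g)
    p1_rot = torch.einsum('bij,bj->bi', rot(g), p1)  # act on the pose
    return ((c1 - c2) ** 2).mean() + ((p2 - p1_rot) ** 2).mean()
```

In use, one would sample pairs (x, g·x) with known relative angle g and minimize this loss; at the optimum the pose component transforms with the group while the class component stays invariant, which is exactly the decomposition described above.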