Incorporating symmetries can lead to highly data-efficient and generalizable models by defining equivalence classes of data samples related by transformations. However, characterizing how transformations act on input data is often difficult, limiting the applicability of equivariant models. We propose learning symmetric embedding networks (SENs) that encode an input space (e.g., images), where we do not know the effect of transformations (e.g., rotations), into a feature space that transforms in a known manner under these operations. This network can be trained end-to-end with an equivariant task network to learn an explicitly symmetric representation. We validate this approach in the context of equivariant transition models with three distinct forms of symmetry. Our experiments demonstrate that SENs facilitate the application of equivariant networks to data with complex symmetry representations. Moreover, doing so can yield improvements in accuracy and generalization relative to both fully equivariant and non-equivariant baselines.
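To make the setup concrete, below is a minimal PyTorch sketch of one possible SEN under assumptions of ours, not taken from the paper: we pick the cyclic rotation group C4 as the known symmetry, lay the latent out as the regular representation of C4 (so a 90° image rotation acts as a cyclic shift along the group axis), and use illustrative names such as `SymmetricEmbedding` and `act_c4`. A simple self-supervised consistency loss stands in here for the end-to-end training with an equivariant task network described above.

```python
import torch
import torch.nn as nn

class SymmetricEmbedding(nn.Module):
    """Unconstrained CNN encoder mapping images to a latent space intended to
    carry the regular representation of C4: a 90-degree rotation of the input
    should correspond to a cyclic shift along the last (group) axis."""
    def __init__(self, n_features=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_features * 4)
        self.n_features = n_features

    def forward(self, x):
        z = self.head(self.conv(x))
        return z.view(-1, self.n_features, 4)  # (batch, features, |C4|)

def act_c4(z, k):
    """Known action of a rotation by 90*k degrees on the latent: a cyclic
    shift. The shift direction is a convention; the learned encoder can
    absorb either choice."""
    return torch.roll(z, shifts=k, dims=-1)

# Training signal (stand-in for the end-to-end task loss): encoding a
# rotated image should match the group action applied to the encoding.
enc = SymmetricEmbedding()
x = torch.randn(8, 3, 64, 64)
k = 1  # one quarter turn
x_rot = torch.rot90(x, k, dims=(-2, -1))
loss = ((enc(x_rot) - act_c4(enc(x), k)) ** 2).mean()
loss.backward()
```

Arranging the latent as (features × group elements) makes the known group action a pure index permutation, so a downstream equivariant task network can operate on it with standard group-convolution machinery; this layout is one choice among several, not a prescription from the paper.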