Unsupervised approaches for learning representations invariant to common transformations are widely used for object recognition. Learning invariances makes models more robust and more practical to deploy in real-world scenarios. Since much of the complexity in recognition tasks arises from data transformations that do not change the intrinsic properties of the object, models that are invariant to these transformations require less training data, which further improves efficiency and simplifies training. In this paper, we investigate how well invariant representations generalize to out-of-distribution data and try to answer the question: do representations that are invariant to certain transformations in a particular seen domain also remain invariant in previously unseen domains? Through extensive experiments, we demonstrate that the invariant model learns unstructured latent representations that are robust to distribution shifts, making invariance a desirable property for training in resource-constrained settings.
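To make the central question concrete, the following minimal sketch (an illustration under our own assumptions, not the paper's actual code, models, or datasets) shows one way invariance of a representation to a transformation could be quantified on a seen versus an unseen domain: compare the representation of each image with the representation of its transformed version, averaged over the data. The encoder, transformation, and data below are placeholder stand-ins.

```python
# Hedged sketch: measuring how invariant a representation is to a transformation,
# separately on a seen (in-distribution) and an unseen (shifted) domain.
# The encoder, transformation, and data are placeholders, not the authors' setup.
import numpy as np

def invariance_score(encode, transform, images):
    """Mean cosine similarity between the representation of each image and the
    representation of its transformed version (1.0 = perfectly invariant)."""
    sims = []
    for x in images:
        z, z_t = encode(x), encode(transform(x))
        sims.append(np.dot(z, z_t) / (np.linalg.norm(z) * np.linalg.norm(z_t) + 1e-8))
    return float(np.mean(sims))

# Placeholder encoder and transformation, for illustration only.
rng = np.random.default_rng(0)
proj = rng.standard_normal((32 * 32, 64))
encode = lambda x: x.reshape(-1) @ proj          # stand-in for a trained invariant encoder
transform = lambda x: np.fliplr(x)               # e.g. a horizontal flip

seen_domain   = rng.random((100, 32, 32))        # stand-in for in-distribution images
unseen_domain = rng.random((100, 32, 32)) * 2.0  # stand-in for shifted-distribution images

print("invariance (seen domain):  ", invariance_score(encode, transform, seen_domain))
print("invariance (unseen domain):", invariance_score(encode, transform, unseen_domain))
```

A gap between the two scores would indicate that invariance learned in the seen domain does not fully transfer to the unseen domain; similar scores would support the claim that the invariant representations remain robust under distribution shift.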