In this work, we observe that many existing self-supervised learning algorithms can be both unified and generalized when seen through the lens of equivariant representations. Specifically, we introduce a general framework we call Homomorphic Self-Supervised Learning, and theoretically show how it may subsume the use of input-augmentations provided an augmentation-homomorphic feature extractor. We validate this theory experimentally for simple augmentations, demonstrate how the framework fails when representational structure is removed, and further empirically explore how the parameters of this framework relate to those of traditional augmentation-based self-supervised learning. We conclude with a discussion of the potential benefits afforded by this new perspective on self-supervised learning.
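To make the central notion concrete: a feature extractor f is augmentation-homomorphic (equivariant) with respect to a set of augmentations when every transformation of the input has a corresponding, known transformation in feature space. A minimal sketch of this condition, in standard equivariance notation (the symbols t_g, \Gamma_g, and the group G are illustrative choices here, not necessarily the paper's own notation):

\[
f(t_g[\mathbf{x}]) = \Gamma_g[f(\mathbf{x})] \quad \forall\, g \in G,
\]

where t_g denotes the action of augmentation g on the input space and \Gamma_g its corresponding action on the representation space. Under this assumption, augmented views can be produced directly in feature space via \Gamma_g, rather than by re-encoding augmented inputs, which is what allows the framework to subsume input augmentations.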