This paper proposes a method for constructing pretext tasks for self-supervised learning on group equivariant neural networks. Group equivariant neural networks are models whose structure is constrained to commute with transformations of the input. It is therefore important to construct pretext tasks for self-supervised learning that do not contradict this equivariance. To ensure that training is consistent with the equivariance, we propose two concepts for self-supervised tasks: equivariant pretext labels and invariant contrastive loss. Equivariant pretext labels use a set of labels on which we can define transformations corresponding to changes of the input. Invariant contrastive loss uses a modified contrastive loss that absorbs the effect of transformations on each input. Experiments on standard image recognition benchmarks demonstrate that equivariant neural networks benefit from the proposed equivariant self-supervised tasks.
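To make these two concepts concrete, the following sketch gives one natural instantiation under simplifying assumptions; the finite group $G$, the feature representation $\rho$, the label action $T_g$, the temperature $\tau$, and the InfoNCE-style objective are illustrative choices and not necessarily the paper's exact formulation. An equivariant pretext label is a label map $y$ that commutes with the group action, $y(g \cdot x) = T_g\, y(x)$, so that transforming the input transforms the target in a predictable way. For the invariant contrastive loss, one way to absorb the effect of transformations is to average an equivariant feature $f$ (with $f(g \cdot x) = \rho(g) f(x)$) over the orbit of the input before applying a standard contrastive objective:
\[
  z(x) \;=\; \frac{1}{|G|} \sum_{g \in G} f(g \cdot x)
        \;=\; \Big( \frac{1}{|G|} \sum_{g \in G} \rho(g) \Big) f(x),
  \qquad
  z(h \cdot x) = z(x) \quad \text{for all } h \in G,
\]
\[
  \mathcal{L}(x, x^{+}) \;=\;
  -\log \frac{\exp\!\big(\operatorname{sim}(z(x), z(x^{+}))/\tau\big)}
             {\sum_{x'} \exp\!\big(\operatorname{sim}(z(x), z(x'))/\tau\big)},
\]
where $x^{+}$ is a positive view of $x$ and the denominator runs over the positive and negative samples in the batch. The invariance of $z$ follows by reindexing the sum over $G$; the loss then cannot distinguish $x$ from $g \cdot x$, so training never penalizes the equivariance of $f$.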