Self-supervised learning (SSL) is a scalable way to learn general visual representations since it learns without labels. However, large-scale unlabeled datasets in the wild often have long-tailed label distributions, a regime in which the behavior of SSL is poorly understood. In this work, we systematically investigate self-supervised learning under dataset imbalance. First, we find via extensive experiments that off-the-shelf self-supervised representations are already more robust to class imbalance than supervised representations: the performance gap between balanced and imbalanced pre-training with SSL is significantly smaller than the corresponding gap for supervised learning, across sample sizes, for both in-domain and, especially, out-of-domain evaluation. Second, towards understanding this robustness, we hypothesize that SSL learns richer features from frequent data: it may learn label-irrelevant but transferable features that help classify rare classes and downstream tasks. In contrast, supervised learning has no incentive to learn features irrelevant to the labels from frequent examples. We validate this hypothesis with semi-synthetic experiments and theoretical analyses in a simplified setting. Third, inspired by the theoretical insights, we devise a re-weighted regularization technique that consistently improves SSL representation quality on imbalanced datasets under several evaluation criteria, closing the small gap between balanced and imbalanced datasets with the same number of examples.
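To make the re-weighting idea concrete, here is a minimal, hypothetical sketch of one way a re-weighted regularizer could be attached to an SSL objective. The function names (`estimate_density_weights`, `reweighted_regularizer`), the Gaussian kernel-density weighting with bandwidth `sigma`, and the simple L2 feature-norm penalty are all illustrative assumptions, not the exact recipe proposed in the paper; the only point carried over from the abstract is that per-example regularization strength is re-weighted toward rare examples.

```python
# Hypothetical sketch of re-weighted regularization for SSL on imbalanced data.
# Assumption: rare examples are identified by low estimated density in the
# current feature space and receive larger regularization weights.

import torch
import torch.nn.functional as F


def estimate_density_weights(features: torch.Tensor, sigma: float = 0.5) -> torch.Tensor:
    """Batch-wise Gaussian kernel-density estimate; rare examples get larger weights."""
    z = F.normalize(features, dim=1)
    dists = torch.cdist(z, z)                                        # (B, B) pairwise distances
    density = torch.exp(-dists ** 2 / (2 * sigma ** 2)).mean(dim=1)  # (B,) estimated density
    weights = 1.0 / (density + 1e-8)                                  # rare => large weight
    return weights / weights.mean()                                   # normalize to mean 1


def reweighted_regularizer(features: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """Per-example L2 penalty on features, re-weighted toward rare examples (illustrative choice)."""
    per_example_penalty = (features ** 2).sum(dim=1)
    return (weights.detach() * per_example_penalty).mean()


# Usage inside a generic SSL training step, where `ssl_loss` is any
# contrastive or non-contrastive objective computed elsewhere:
#   feats = encoder(images)
#   w = estimate_density_weights(feats)
#   loss = ssl_loss + lambda_reg * reweighted_regularizer(feats, w)
```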