Recent progress in self-supervised learning has demonstrated promising results in multiple visual tasks. An important ingredient in high-performing self-supervised methods is the use of data augmentation by training models to place different augmented views of the same image nearby in embedding space. However, commonly used augmentation pipelines treat images holistically, ignoring the semantic relevance of parts of an image (e.g., a subject vs. a background), which can lead to the learning of spurious correlations. Our work addresses this problem by investigating a class of simple, yet highly effective "background augmentations", which encourage models to focus on semantically-relevant content by discouraging them from focusing on image backgrounds. Through a systematic investigation, we show that background augmentations lead to substantial improvements in performance across a spectrum of state-of-the-art self-supervised methods (MoCo-v2, BYOL, SwAV) on a variety of tasks, e.g. $\sim$+1-2% gains on ImageNet, enabling performance on par with the supervised baseline. Further, we find the improvement in limited-labels settings is even larger (up to 4.2%). Background augmentations also improve robustness to a number of distribution shifts, including natural adversarial examples, ImageNet-9, adversarial attacks, and ImageNet-Renditions. We also make progress in completely unsupervised saliency detection, in the process of generating the saliency masks used for background augmentations.
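The core operation behind such background augmentations can be illustrated as simple alpha compositing: given a saliency mask that scores each pixel as foreground or background, the salient subject is pasted onto a different background before the usual augmentation pipeline runs. The sketch below is illustrative only (the function name and nested-list image representation are our own, not the paper's implementation), assuming a per-pixel saliency mask with values in [0, 1].

```python
def background_swap(image, mask, background):
    """Composite the salient foreground of `image` onto `background`.

    `image` and `background` are H x W lists of (r, g, b) tuples;
    `mask` is an H x W list of floats (1.0 = foreground, 0.0 = background).
    Returns a new H x W image blended per-pixel by the mask.
    """
    out = []
    for img_row, mask_row, bg_row in zip(image, mask, background):
        row = []
        for (r, g, b), a, (br, bgr, bb) in zip(img_row, mask_row, bg_row):
            # Keep foreground pixels where the mask is high,
            # background pixels where it is low.
            row.append((a * r + (1 - a) * br,
                        a * g + (1 - a) * bgr,
                        a * b + (1 - a) * bb))
        out.append(row)
    return out
```

In practice the two views fed to a method like MoCo-v2 or BYOL would then be built from such composites, so that matching them in embedding space cannot rely on background cues.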