Self-supervised methods have achieved remarkable success in transfer learning, often matching or exceeding the accuracy of supervised pre-training. Most prior work achieves this by increasing pre-training computation through complex data augmentation, multiple views, or lengthy training schedules. In this work, we investigate a related but orthogonal question: given a \textit{fixed} FLOP budget, what are the best datasets, models, and (self-)supervised training methods for obtaining high accuracy on representative visual tasks? Given the availability of large datasets, this setting is often more relevant for academic and industry labs alike. We examine five large-scale datasets (JFT-300M, ALIGN, ImageNet-1K, ImageNet-21K, and COCO) and six pre-training methods (CLIP, DINO, SimCLR, BYOL, Masked Autoencoding, and supervised). In a like-for-like comparison, we characterize their FLOP and CO$_2$ footprints relative to their accuracy when transferred to a canonical image segmentation task. Our analysis reveals strong disparities in the computational efficiency of pre-training methods and in their dependence on dataset quality. In particular, our results call into question the commonly held assumption that self-supervised methods inherently scale to large, uncurated data. We therefore advocate for (1) paying closer attention to dataset curation and (2) reporting accuracies in the context of total computational cost.