This paper demonstrates an approach for learning highly semantic image representations without relying on hand-crafted data-augmentations. We introduce the Image-based Joint-Embedding Predictive Architecture (I-JEPA), a non-generative approach for self-supervised learning from images. The idea behind I-JEPA is simple: from a single context block, predict the representations of various target blocks in the same image. A core design choice to guide I-JEPA towards producing semantic representations is the masking strategy; specifically, it is crucial to (a) sample target blocks with sufficiently large scale (semantic), and to (b) use a sufficiently informative (spatially distributed) context block. Empirically, when combined with Vision Transformers, we find I-JEPA to be highly scalable. For instance, we train a ViT-Huge/14 on ImageNet using 16 A100 GPUs in under 72 hours to achieve strong downstream performance across a wide range of tasks, from linear classification to object counting and depth prediction.
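The multi-block masking strategy described above can be sketched on a patch grid: sample several large "target" blocks, sample one large spatially distributed "context" block with the target patches removed, and regress predicted representations onto target representations in latent space. The grid size, block counts, and scale/aspect-ratio ranges below are illustrative assumptions, not values stated in this abstract, and random vectors stand in for the encoder/predictor outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 16  # e.g. a 224x224 image with 14x14 patches -> a 16x16 patch grid

def sample_block(grid, scale_range, ar_range, rng):
    """Sample a rectangular block mask covering roughly `scale` of the grid area."""
    area = rng.uniform(*scale_range) * grid * grid
    ar = rng.uniform(*ar_range)  # aspect ratio w/h
    h = max(1, min(int(round(float(np.sqrt(area / ar)))), grid))
    w = max(1, min(int(round(float(np.sqrt(area * ar)))), grid))
    top = rng.integers(0, grid - h + 1)
    left = rng.integers(0, grid - w + 1)
    mask = np.zeros((grid, grid), dtype=bool)
    mask[top:top + h, left:left + w] = True
    return mask

# (a) several target blocks of sufficiently large scale ("semantic");
# the ranges here are assumed for illustration
targets = [sample_block(GRID, (0.15, 0.20), (0.75, 1.5), rng) for _ in range(4)]

# (b) one large, spatially distributed context block ...
context = sample_block(GRID, (0.85, 1.0), (1.0, 1.0), rng)
# ... with target patches removed, so targets must be predicted, not copied
for t in targets:
    context &= ~t

# Stand-in representations: in I-JEPA these would come from the context
# encoder + predictor and from a target encoder; random vectors here.
D = 8
pred_repr = rng.normal(size=(GRID * GRID, D))
target_repr = rng.normal(size=(GRID * GRID, D))

# Predictive loss in representation space for one target block:
# mean squared error over that block's patch indices
idx = np.flatnonzero(targets[0].ravel())
loss = float(np.mean((pred_repr[idx] - target_repr[idx]) ** 2))
print(f"context patches: {int(context.sum())}, per-target loss: {loss:.3f}")
```

Because the context block is large and spatially distributed while the target patches are carved out of it, the predictor must infer target-block representations rather than copy visible pixels, which is the non-generative objective the abstract describes.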