This paper demonstrates an approach for learning highly semantic image representations without relying on hand-crafted data-augmentations. We introduce the Image-based Joint-Embedding Predictive Architecture (I-JEPA), a non-generative approach for self-supervised learning from images. The idea behind I-JEPA is simple: from a single context block, predict the representations of various target blocks in the same image. A core design choice to guide I-JEPA towards producing semantic representations is the masking strategy; specifically, it is crucial to (a) sample target blocks with sufficiently large scale (semantic), and to (b) use a sufficiently informative (spatially distributed) context block. Empirically, when combined with Vision Transformers, we find I-JEPA to be highly scalable. For instance, we train a ViT-Huge/14 on ImageNet using 16 A100 GPUs in under 72 hours to achieve strong downstream performance across a wide range of tasks, from linear classification to object counting and depth prediction.
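To make the core idea concrete, the following is a minimal sketch of an I-JEPA training step in PyTorch. It assumes ViT-style encoders that return per-patch tokens; the function names (`ijepa_step`, `ema_update`), the `patch_mask` and `target_positions` arguments, and the mask inputs are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal I-JEPA training-step sketch (assumed interfaces, not the official code).
import torch
import torch.nn.functional as F

def ijepa_step(context_encoder, target_encoder, predictor,
               images, context_mask, target_masks):
    """One I-JEPA update: from a single context block, predict the
    representations of several target blocks in representation space."""
    # Target representations come from an EMA copy of the encoder,
    # computed on the full image and then sliced per target block (no gradient).
    with torch.no_grad():
        full_repr = target_encoder(images)                 # [B, N, D] patch tokens
        targets = [full_repr[:, m] for m in target_masks]  # each [B, |m|, D]

    # Context representation: encode only the visible context patches.
    # `patch_mask` is an assumed kwarg selecting which patches the encoder sees.
    ctx = context_encoder(images, patch_mask=context_mask)

    # Predict each target block's representation from the context,
    # conditioned on that block's patch positions (mask tokens in the paper).
    loss = 0.0
    for m, tgt in zip(target_masks, targets):
        pred = predictor(ctx, target_positions=m)          # [B, |m|, D]
        loss = loss + F.mse_loss(pred, tgt)                # L2-style loss in representation space
    return loss / len(target_masks)

@torch.no_grad()
def ema_update(target_encoder, context_encoder, momentum=0.996):
    """Target-encoder weights track the context encoder via an
    exponential moving average (a common choice; momentum is illustrative)."""
    for pt, pc in zip(target_encoder.parameters(), context_encoder.parameters()):
        pt.mul_(momentum).add_(pc, alpha=1.0 - momentum)
```

Note that the loss is computed entirely in representation space rather than pixel space, which is what distinguishes this joint-embedding predictive setup from generative (pixel-reconstruction) methods.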