We need billion-scale images to achieve more generalizable and ground-breaking vision models, as well as massive dataset storage to ship those images (e.g., the LAION-4B dataset needs 240TB of storage). However, handling ever-growing dataset storage with limited storage infrastructure has become challenging. A number of storage-efficient training methods have been proposed to tackle this problem, but they rarely scale or suffer severe performance degradation. In this paper, we propose a storage-efficient training strategy for vision classifiers on large-scale datasets (e.g., ImageNet) that uses only 1024 tokens per instance, without accessing raw-level pixels; our token storage requires less than 1% of the space of the original JPEG-compressed raw pixels. We also propose token augmentations and a Stem-adaptor module that let our approach use the same architecture as pixel-based approaches, with only minimal modifications to the stem layer and carefully tuned optimization settings. Our experimental results on ImageNet-1k show that our method outperforms other storage-efficient training methods by a large margin. We further show the effectiveness of our method in two other practical scenarios: storage-efficient pre-training and continual learning. Code is available at https://github.com/naver-ai/seit.
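To make the token-based pipeline concrete, below is a minimal, hypothetical PyTorch sketch of the idea behind a Stem-adaptor: each stored instance is a sequence of 1024 discrete token ids (e.g., produced by a pretrained vector-quantized tokenizer), and a small embedding stem maps them into the feature space a standard ViT backbone expects, so the rest of the architecture is unchanged. The class name `StemAdaptor` and the values of `codebook_size`, `embed_dim`, and `num_tokens` are illustrative assumptions, not the authors' exact implementation; see the linked repository for the real one.

```python
import torch
import torch.nn as nn

class StemAdaptor(nn.Module):
    """Illustrative token stem (not the paper's exact code): maps 1024
    discrete token ids per instance into the patch-embedding space of a
    standard ViT, replacing the usual pixel patch-embedding stem."""

    def __init__(self, codebook_size=8192, embed_dim=768, num_tokens=1024):
        # codebook_size / embed_dim / num_tokens are assumed placeholders.
        super().__init__()
        # One learned vector per codebook entry, analogous to a word embedding.
        self.token_embed = nn.Embedding(codebook_size, embed_dim)
        # Learned positional embedding over the fixed-length token sequence.
        self.pos_embed = nn.Parameter(torch.zeros(1, num_tokens, embed_dim))

    def forward(self, token_ids):
        # token_ids: (B, 1024) int64 ids loaded from compact token storage.
        x = self.token_embed(token_ids)      # (B, 1024, embed_dim)
        return x + self.pos_embed            # ready for the ViT encoder blocks

# Usage sketch: a mini-batch of stored token sequences instead of raw pixels.
ids = torch.randint(0, 8192, (8, 1024))      # (batch, tokens per instance)
features = StemAdaptor()(ids)
print(features.shape)                        # torch.Size([8, 1024, 768])
```

The storage intuition: 1024 ids, each fitting in a little over a byte, come to a few KB per instance, versus roughly 100KB or more for a typical JPEG image, which is consistent with the sub-1% storage figure claimed above.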