ImageNet-1K serves as the primary dataset for pretraining deep learning models for computer vision tasks. The ImageNet-21K dataset, which is bigger and more diverse, is used less frequently for pretraining, mainly due to its complexity, low accessibility, and an underestimation of its added value. This paper aims to close this gap and make high-quality, efficient pretraining on ImageNet-21K available to everyone. Via a dedicated preprocessing stage, utilization of the WordNet hierarchical structure, and a novel training scheme called semantic softmax, we show that various models, including small mobile-oriented ones, significantly benefit from ImageNet-21K pretraining across numerous datasets and tasks. We also show that our scheme outperforms previous ImageNet-21K pretraining schemes for prominent new models such as ViT and Mixer. Our proposed pretraining pipeline is efficient, accessible, and leads to SoTA reproducible results from a publicly available dataset. The training code and pretrained models are available at: https://github.com/Alibaba-MIIL/ImageNet21K
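The abstract names the semantic softmax scheme without detailing it; below is a minimal illustrative sketch of the underlying idea, assuming classes are partitioned into WordNet-derived hierarchy groups and softmax plus cross-entropy is applied only within the group that contains each target class. The function name `semantic_softmax_loss`, its arguments, and the toy grouping are hypothetical simplifications and not the released implementation.

```python
# Sketch of a semantic-softmax-style loss: classes are partitioned into
# hierarchy groups (e.g., by WordNet depth, assumed precomputed here),
# softmax is restricted to each group, and cross-entropy is taken only
# over the group containing the ground-truth class.
import torch
import torch.nn.functional as F


def semantic_softmax_loss(logits, targets, group_index, group_offsets):
    """
    logits:        (batch, num_classes) raw scores
    targets:       (batch,) class indices
    group_index:   (num_classes,) hierarchy-group id of each class
    group_offsets: list of (start, end) column ranges, one per group,
                   assuming classes are sorted so each group is contiguous
    """
    loss = logits.new_zeros(())
    target_groups = group_index[targets]          # hierarchy group of each label
    for g, (start, end) in enumerate(group_offsets):
        mask = target_groups == g                 # samples whose label lies in group g
        if mask.any():
            group_logits = logits[mask][:, start:end]   # softmax restricted to this group
            group_targets = targets[mask] - start       # re-index labels within the group
            loss = loss + F.cross_entropy(group_logits, group_targets, reduction="sum")
    return loss / logits.shape[0]


# Toy usage: 8 classes split into two hierarchy groups of 4 classes each.
if __name__ == "__main__":
    logits = torch.randn(5, 8)
    targets = torch.tensor([0, 3, 5, 6, 2])
    group_index = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
    group_offsets = [(0, 4), (4, 8)]
    print(semantic_softmax_loss(logits, targets, group_index, group_offsets))
```

In the paper's setting, restricting the softmax to a hierarchy level avoids penalizing the model for confusing classes that are not semantically comparable (e.g., a fine-grained class versus a broad WordNet ancestor); the released repository should be consulted for the exact formulation.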