ImageNet-1K serves as the primary dataset for pretraining deep learning models for computer vision tasks. The ImageNet-21K dataset, which contains more images and classes, is used less frequently for pretraining, mainly due to its complexity and an underestimation of its added value compared to standard ImageNet-1K pretraining. This paper aims to close this gap and make high-quality, efficient pretraining on ImageNet-21K available to everyone. Via a dedicated preprocessing stage that utilizes WordNet hierarchies, and a novel training scheme called semantic softmax, we show that various models, including small mobile-oriented models, significantly benefit from ImageNet-21K pretraining across numerous datasets and tasks. We also show that we outperform previous ImageNet-21K pretraining schemes for prominent new models such as ViT. Our proposed pretraining pipeline is efficient, accessible, and leads to SoTA reproducible results from a publicly available dataset. The training code and pretrained models are available at: https://github.com/Alibaba-MIIL/ImageNet21K
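To give a rough intuition for the semantic softmax idea mentioned above, the sketch below assumes the label space is partitioned into per-hierarchy-level groups (e.g. by WordNet depth) and computes a softmax cross-entropy within each group containing the target. The function name, the `level_slices` representation, and the loss averaging are hypothetical illustrations, not the paper's actual implementation:

```python
import numpy as np

def semantic_softmax_loss(logits, target, level_slices):
    """Illustrative sketch of a hierarchy-aware softmax loss.

    logits       -- 1-D array of class scores over all 21K classes
    target       -- index of the ground-truth class
    level_slices -- hypothetical list of (start, end) index ranges, one per
                    semantic hierarchy level (e.g. derived from WordNet depth)

    A softmax is applied independently within each level's slice, and the
    cross-entropy is accumulated only for levels that contain the target.
    """
    losses = []
    for start, end in level_slices:
        if not (start <= target < end):
            continue  # the target class has no label at this hierarchy level
        group = logits[start:end]
        group = group - group.max()                 # numerical stability
        probs = np.exp(group) / np.exp(group).sum() # softmax within the level
        losses.append(-np.log(probs[target - start]))
    return float(np.mean(losses)) if losses else 0.0
```

With uniform (all-zero) logits and a single level of four classes, the loss reduces to ordinary softmax cross-entropy over that level, i.e. log(4).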