ImageNet-1K serves as the primary dataset for pretraining deep learning models for computer vision tasks. The ImageNet-21K dataset, which contains more images and classes, is used less frequently for pretraining, mainly due to its complexity and an underestimation of its added value compared to standard ImageNet-1K pretraining. This paper aims to close this gap and make high-quality, efficient pretraining on ImageNet-21K available to everyone. Via a dedicated preprocessing stage, utilizing WordNet hierarchies, and a novel training scheme called semantic softmax, we show that various models, including small mobile-oriented models, significantly benefit from ImageNet-21K pretraining on numerous datasets and tasks. We also show that we outperform previous ImageNet-21K pretraining schemes on prominent new models like ViT. Our proposed pretraining pipeline is efficient, accessible, and leads to SoTA reproducible results from a publicly available dataset. The training code and pretrained models are available at: https://github.com/Alibaba-MIIL/ImageNet21K
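To make the "semantic softmax" idea concrete, below is a minimal PyTorch sketch, not the authors' exact implementation: the flat label space is partitioned into groups corresponding to WordNet hierarchy levels, a separate softmax / cross-entropy is applied per group, and the per-group losses are aggregated, skipping levels where an image has no label. The function name `semantic_softmax_loss`, the `level_slices` boundaries, and the `-1` "no label at this level" convention are illustrative assumptions, not the repo's API.

```python
import torch
import torch.nn.functional as F

def semantic_softmax_loss(logits, level_targets, level_slices):
    """
    logits:        (batch, num_classes) raw scores over all classes.
    level_targets: (batch, num_levels) target index within each level's slice,
                   or -1 when the label has no ancestor at that level.
    level_slices:  list of (start, end) column ranges, one per hierarchy level.
    """
    losses = []
    for level, (start, end) in enumerate(level_slices):
        level_logits = logits[:, start:end]       # scores restricted to this hierarchy level
        targets = level_targets[:, level]
        valid = targets >= 0                       # images that carry a label at this level
        if valid.any():
            losses.append(F.cross_entropy(level_logits[valid], targets[valid]))
    return torch.stack(losses).mean()

# Toy usage: 10 classes split into two hypothetical hierarchy levels.
level_slices = [(0, 4), (4, 10)]
logits = torch.randn(8, 10, requires_grad=True)
level_targets = torch.randint(0, 4, (8, 2))
level_targets[:, 1] = torch.randint(-1, 6, (8,))   # -1 marks "no label at this level"
loss = semantic_softmax_loss(logits, level_targets, level_slices)
loss.backward()
```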