State-of-the-art image classifiers trained on massive datasets (such as ImageNet) have been shown to be vulnerable to a range of both intentional and incidental distribution shifts. Recently, however, several classifiers with favorable out-of-distribution (OOD) robustness properties have emerged, achieving high in-distribution accuracy on their target tasks while maintaining accuracy on challenging OOD benchmarks. We present a meta-analysis of a wide range of publicly released models, most of which were published within the last twelve months. Through this meta-analysis, we empirically identify four main commonalities shared by the best-performing OOD-robust models, all of which illuminate the considerable promise of vision-language pre-training.