We algorithmically identify label errors in the test sets of 10 of the most commonly-used computer vision, natural language, and audio datasets, and subsequently study the potential for these label errors to affect benchmark results. Errors in test sets are numerous and widespread: we estimate an average of 3.4% errors across the 10 datasets, where for example 2916 label errors comprise 6% of the ImageNet validation set. Putative label errors are found using confident learning and then human-validated via crowdsourcing (54% of the algorithmically-flagged candidates are indeed erroneously labeled). Surprisingly, we find that lower capacity models may be practically more useful than higher capacity models in real-world datasets with high proportions of erroneously labeled data. For example, on ImageNet with corrected labels: ResNet-18 outperforms ResNet-50 if the prevalence of originally mislabeled test examples increases by just 6%. On CIFAR-10 with corrected labels: VGG-11 outperforms VGG-19 if the prevalence of originally mislabeled test examples increases by 5%. Traditionally, ML practitioners choose which model to deploy based on test accuracy -- our findings advise caution here, proposing that judging models over correctly labeled test sets may be more useful, especially for noisy real-world datasets.
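The abstract above flags putative label errors with confident learning. A minimal sketch of the core idea follows, assuming softmax outputs from some trained model: each class gets a threshold equal to the average self-confidence of examples given that label, and an example is flagged when its highest above-threshold probability belongs to a class other than its given label. (The function name and exact flagging rule here are illustrative simplifications, not the paper's full procedure, which also estimates the confident joint and supports ranking by self-confidence.)

```python
import numpy as np

def find_label_issues(labels, pred_probs):
    """Flag candidate label errors via a simplified confident-learning rule.

    labels: (n,) int array of given (possibly noisy) labels.
    pred_probs: (n, k) array of model predicted probabilities.
    Returns a boolean mask of candidate label errors.
    """
    n, k = pred_probs.shape
    # Per-class threshold: mean predicted probability of class j
    # over examples whose given label is j (average self-confidence).
    thresholds = np.array([
        pred_probs[labels == j, j].mean() for j in range(k)
    ])
    issues = np.zeros(n, dtype=bool)
    for i in range(n):
        above = pred_probs[i] >= thresholds
        if above.any():
            # Most likely class among those meeting their threshold.
            j = np.argmax(np.where(above, pred_probs[i], -np.inf))
            # Flag if that class disagrees with the given label.
            issues[i] = (j != labels[i])
    return issues
```

On a toy example where the second point is labeled 0 but the model assigns it probability 0.9 of class 1, only that point is flagged; in practice the flagged candidates would then be human-validated, as in the crowdsourcing step described above.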