We identify label errors in the test sets of 10 of the most commonly used computer vision, natural language, and audio datasets, and subsequently study the potential for these label errors to affect benchmark results. Errors in test sets are numerous and widespread: we estimate an average of 3.4% errors across the 10 datasets, where for example 2916 label errors comprise 6% of the ImageNet validation set. Putative label errors are identified using confident learning algorithms and then human-validated via crowdsourcing (54% of the algorithmically flagged candidates are indeed erroneously labeled). Traditionally, machine learning practitioners choose which model to deploy based on test accuracy; our findings advise caution here, suggesting that judging models over correctly labeled test sets may be more useful, especially for noisy real-world datasets. Surprisingly, we find that lower-capacity models may be practically more useful than higher-capacity models in real-world datasets with high proportions of erroneously labeled data. For example, on ImageNet with corrected labels, ResNet-18 outperforms ResNet-50 if the prevalence of originally mislabeled test examples increases by just 6%. On CIFAR-10 with corrected labels, VGG-11 outperforms VGG-19 if the prevalence of originally mislabeled test examples increases by just 5%.
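To make the flagging step concrete, the following is a minimal sketch of a confident-learning-style heuristic, not the paper's exact algorithm: each class gets a confidence threshold equal to the average self-confidence of examples given that label, and an example is flagged as a candidate label error when the model is confidently above threshold for a class other than the given label. The function name and toy data are illustrative assumptions.

```python
def find_candidate_label_errors(labels, pred_probs):
    """Flag indices whose given label disagrees with a confident model prediction.

    labels: list of given (possibly noisy) class indices, one per example.
    pred_probs: list of per-example predicted probability vectors
                (ideally out-of-sample, e.g. from cross-validation).
    """
    n_classes = len(pred_probs[0])
    # Per-class threshold: mean predicted probability of class j
    # among examples whose given label is j (average self-confidence).
    thresholds = []
    for j in range(n_classes):
        probs_j = [p[j] for p, y in zip(pred_probs, labels) if y == j]
        thresholds.append(sum(probs_j) / len(probs_j))
    flagged = []
    for i, (p, y) in enumerate(zip(pred_probs, labels)):
        # Classes the model is "confident" about for this example.
        confident = [j for j in range(n_classes) if p[j] >= thresholds[j]]
        if confident:
            j_star = max(confident, key=lambda j: p[j])
            if j_star != y:  # confident prediction disagrees with given label
                flagged.append(i)
    return flagged


# Toy example: example 2 is labeled class 0, but the model is
# confident it belongs to class 1, so it is flagged as a candidate error.
labels = [0, 0, 0, 1, 1, 1]
pred_probs = [
    [0.9, 0.1],
    [0.8, 0.2],
    [0.1, 0.9],   # mislabeled candidate
    [0.1, 0.9],
    [0.3, 0.7],
    [0.15, 0.85],
]
print(find_candidate_label_errors(labels, pred_probs))  # → [2]
```

In the paper's pipeline, candidates flagged this way are then human-validated via crowdsourcing rather than trusted directly; the `cleanlab` library provides a production implementation of confident learning.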