Learning exists in the context of data, yet notions of confidence typically focus on model predictions, not label quality. Confident learning (CL) is an alternative approach which focuses instead on label quality by characterizing and identifying label errors in datasets, based on the principles of pruning noisy data, counting with probabilistic thresholds to estimate noise, and ranking examples to train with confidence. Whereas numerous studies have developed these principles independently, here, we combine them, building on the assumption of a class-conditional noise process to directly estimate the joint distribution between noisy (given) labels and uncorrupted (unknown) labels. This results in a generalized CL which is provably consistent and experimentally performant. We present sufficient conditions where CL exactly finds label errors, and show CL performance exceeding seven recent competitive approaches for learning with noisy labels on the CIFAR dataset. Uniquely, the CL framework is not coupled to a specific data modality or model (e.g., we use CL to find several label errors in the presumed error-free MNIST dataset and improve sentiment classification on text data in Amazon Reviews). We also employ CL on ImageNet to quantify ontological class overlap (e.g., estimating 645 "missile" images are mislabeled as their parent class "projectile"), and moderately increase model accuracy (e.g., for ResNet) by cleaning data prior to training. These results are replicable using the open-source cleanlab release.
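The "counting with probabilistic thresholds" step can be made concrete with a minimal sketch. This is not the cleanlab implementation, only an illustrative reconstruction under stated assumptions: a per-class threshold is the mean out-of-sample predicted probability of class j among examples given label j, the confident joint counts examples whose probability clears a class threshold, and off-diagonal entries are flagged as likely label errors. The function name `confident_joint` and the tie-breaking rule (argmax over candidate classes) are illustrative choices, not quotations from the paper.

```python
import numpy as np

def confident_joint(labels, pred_probs):
    """Sketch of CL's counting step: estimate the confident joint
    C[given_label][suggested_label] from out-of-sample predicted
    probabilities, using per-class probability thresholds."""
    n_classes = pred_probs.shape[1]
    # Threshold t_j: mean predicted probability of class j among
    # examples whose given (noisy) label is j.
    thresholds = np.array([
        pred_probs[labels == j, j].mean() for j in range(n_classes)
    ])
    C = np.zeros((n_classes, n_classes), dtype=int)
    error_idx = []
    for i, (y_given, p) in enumerate(zip(labels, pred_probs)):
        # Candidate true classes: probability clears that class's threshold.
        candidates = np.flatnonzero(p >= thresholds)
        if len(candidates) == 0:
            continue  # too uncertain to count toward the joint
        y_star = candidates[np.argmax(p[candidates])]
        C[y_given, y_star] += 1
        if y_star != y_given:
            error_idx.append(i)  # off-diagonal: likely label error
    return C, error_idx
```

On a toy two-class dataset with one flipped label per class, the off-diagonal mass of `C` identifies exactly those flipped examples; normalizing `C` gives the estimated joint distribution between noisy and true labels described in the abstract.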