We tackle sequential learning under label noise in applications where a human supervisor can be queried to relabel suspicious examples. Existing approaches are flawed in that they only relabel incoming examples that look ``suspicious'' to the model. As a consequence, mislabeled examples that elude (or do not undergo) this cleaning step end up tainting the training data and the model, with no further chance of being cleaned. We propose Cincer, a novel approach that cleans both new and past data by identifying pairs of mutually incompatible examples. Whenever it detects a suspicious example, Cincer identifies a counter-example in the training set that -- according to the model -- is maximally incompatible with the suspicious example, and asks the annotator to relabel either or both examples, thereby resolving the potential inconsistency. The counter-examples are chosen to be maximally incompatible, so as to serve as explanations of the model's suspicion, and highly influential, so as to convey as much information as possible if relabeled. Cincer achieves this by leveraging an efficient and robust approximation of influence functions based on the Fisher information matrix (FIM). Our extensive empirical evaluation shows that clarifying the reasons behind the model's suspicions by cleaning the counter-examples helps acquire substantially better data and models, especially when paired with our FIM approximation.
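For concreteness, the following sketch recalls the standard influence-function formulation and the kind of FIM-based approximation alluded to above; the notation ($z$, $\ell$, $\hat{\theta}$) is ours, and the exact estimator used by Cincer may differ. The influence of a training example $z$ on the loss at an example $z'$ is commonly written as
\[
  \mathcal{I}(z, z') = -\nabla_\theta \ell(z', \hat{\theta})^\top \, H_{\hat{\theta}}^{-1} \, \nabla_\theta \ell(z, \hat{\theta}),
  \qquad
  H_{\hat{\theta}} = \frac{1}{n} \sum_{i=1}^{n} \nabla_\theta^2 \ell(z_i, \hat{\theta}).
\]
Inverting the Hessian $H_{\hat{\theta}}$ is expensive and can be ill-conditioned for non-convex models, so it may be replaced by the empirical Fisher information matrix
\[
  F_{\hat{\theta}} = \frac{1}{n} \sum_{i=1}^{n} \nabla_\theta \ell(z_i, \hat{\theta}) \, \nabla_\theta \ell(z_i, \hat{\theta})^\top,
\]
which is positive semi-definite by construction and, for log-likelihood losses, approximates the Hessian when the model fits the data distribution well.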