We tackle sequential learning under label noise in applications where a human supervisor can be queried to relabel suspicious examples. Existing approaches are flawed in that they only relabel incoming examples that look "suspicious" to the model. As a consequence, mislabeled examples that elude (or do not undergo) this cleaning step end up tainting the training data and the model, with no further chance of being cleaned. We propose Cincer, a novel approach that cleans both new and past data by identifying pairs of mutually incompatible examples. Whenever it detects a suspicious example, Cincer identifies a counter-example in the training set that -- according to the model -- is maximally incompatible with the suspicious example, and asks the annotator to relabel either or both examples, resolving the possible inconsistency. The counter-examples are chosen to be maximally incompatible, so as to serve as explanations of the model's suspicion, and highly influential, so as to convey as much information as possible if relabeled. Cincer achieves this by leveraging an efficient and robust approximation of influence functions based on the Fisher information matrix (FIM). Our extensive empirical evaluation shows that clarifying the reasons behind the model's suspicions by cleaning the counter-examples helps in acquiring substantially better data and models, especially when paired with our FIM approximation.
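The FIM-based influence approximation mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a simple logistic-regression model and scores the influence of each training example z on a suspicious new example z' as g(z')ᵀ F⁻¹ g(z), where g(·) are per-example loss gradients and F is a damped empirical Fisher matrix. The function names and the damping constant are illustrative choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_example_grads(w, X, y):
    # Gradient of the logistic loss for each example: (p - y) * x.
    p = sigmoid(X @ w)
    return (p - y)[:, None] * X          # shape (n, d)

def empirical_fim(grads, damping=1e-3):
    # Empirical Fisher: average outer product of per-example gradients,
    # damped so the inverse is well-defined.
    n, d = grads.shape
    return grads.T @ grads / n + damping * np.eye(d)

def influence_scores(w, X_train, y_train, x_new, y_new):
    # Influence of each training example on the suspicious new example,
    # approximated as G F^{-1} g_new.
    G = per_example_grads(w, X_train, y_train)
    g_new = per_example_grads(w, x_new[None, :], np.array([y_new]))[0]
    F_inv = np.linalg.inv(empirical_fim(G))
    return G @ (F_inv @ g_new)           # shape (n,)

# Toy usage: flip one label so the incoming example looks suspicious,
# then pick the most incompatible (highest-magnitude influence) example.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(3)
scores = influence_scores(w, X, y, X[0], 1.0 - y[0])
counter_example = int(np.argmax(np.abs(scores)))
```

In the full method, the example with the largest-magnitude influence on the suspicious input serves double duty: it explains the model's suspicion and, if mislabeled, is the most informative candidate for relabeling.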