Deep neural networks are powerful tools for representation learning, but they can easily overfit to noisy labels, which are prevalent in many real-world scenarios. Such noisy supervision can stem from variation among labelers, label corruption by adversaries, and similar sources. To combat label noise, one popular line of work applies customized weights to the training instances so that corrupted examples contribute less to model learning. However, such mechanisms potentially discard important information about the data distribution and therefore yield suboptimal results. To leverage useful information from the corrupted instances, an alternative is the bootstrapping loss, which reconstructs new training targets on the fly by incorporating the network's own predictions (i.e., pseudo-labels). In this paper, we propose a more generic learnable loss objective that enables a joint reweighting of instances and labels at once. Specifically, our method dynamically adjusts the per-sample importance weight between the real observed labels and pseudo-labels, where the weights are efficiently determined in a meta process. Compared to previous instance reweighting methods, our approach concurrently performs implicit relabeling and thereby yields substantial improvements at almost no extra cost. Extensive experimental results demonstrate the strengths of our approach over existing methods on multiple natural and medical image benchmark datasets, including CIFAR-10, CIFAR-100, ISIC2019, and Clothing1M. The code is publicly available at https://github.com/yuyinzhou/L2B.
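As a rough illustration of the reweighted bootstrapping objective described above, the following PyTorch sketch combines the loss on the observed (possibly noisy) labels with the loss on the model's own pseudo-labels using per-sample weights. The function name `l2b_loss` and the weight names `alpha` and `beta` are illustrative assumptions, not the paper's API; in the actual method these weights are determined by a meta process on a small clean set rather than fixed as shown here.

```python
# Minimal sketch of a per-sample weighted bootstrapping loss (assumed names, not the official L2B code).
import torch
import torch.nn.functional as F

def l2b_loss(logits, noisy_labels, alpha, beta):
    """Weighted combination of observed-label loss and pseudo-label loss.

    logits:       (N, C) model outputs
    noisy_labels: (N,)   observed (possibly corrupted) class indices
    alpha, beta:  (N,)   non-negative per-sample weights (meta-learned in the paper)
    """
    pseudo_labels = logits.detach().argmax(dim=1)           # network's own predictions
    loss_observed = F.cross_entropy(logits, noisy_labels, reduction="none")
    loss_pseudo   = F.cross_entropy(logits, pseudo_labels, reduction="none")
    # beta = 0 recovers plain instance reweighting; alpha = 0 trains purely on pseudo-labels.
    return (alpha * loss_observed + beta * loss_pseudo).mean()

# Toy usage with random data.
if __name__ == "__main__":
    torch.manual_seed(0)
    logits = torch.randn(8, 10, requires_grad=True)
    labels = torch.randint(0, 10, (8,))
    alpha = torch.full((8,), 0.7)
    beta = torch.full((8,), 0.3)
    loss = l2b_loss(logits, labels, alpha, beta)
    loss.backward()
    print(float(loss))
```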