Partial label learning (PLL) is a typical weakly supervised learning framework in which each training instance is associated with a candidate label set, among which only one label is valid. To solve PLL problems, existing methods typically try to disambiguate the candidate sets either by using prior knowledge, such as the structural information of the training data, or by refining model outputs in a self-training manner. Unfortunately, these methods often fail to achieve favorable performance due to the lack of prior information or to unreliable predictions in the early stage of model training. In this paper, we propose a novel framework for partial label learning with meta objective guided disambiguation (MoGD), which aims to recover the ground-truth label from the candidate label set by solving a meta objective on a small validation set. Specifically, to alleviate the negative impact of false positive labels, we re-weight each candidate label based on the meta loss on the validation set. The classifier is then trained by minimizing the weighted cross-entropy loss. The proposed method can be easily implemented with various deep networks using the ordinary SGD optimizer. Theoretically, we prove the convergence property of the meta objective and derive estimation error bounds for the proposed method. Extensive experiments on various benchmark datasets and real-world PLL datasets demonstrate that the proposed method achieves competitive performance compared with state-of-the-art methods.
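To make the re-weighting idea concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: for a linear softmax classifier, each candidate label is scored by a one-step lookahead update, and its weight is set via a softmax over the resulting validation (meta) losses, so candidates whose updates reduce the meta loss receive larger weights. All function names (`meta_weights`, `grad_ce`) and the toy data are hypothetical.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(W, x, y):
    # CE loss of a linear softmax classifier on a single instance (x, y).
    p = softmax(x @ W)
    return -np.log(p[y] + 1e-12)

def grad_ce(W, x, y):
    # Gradient of the CE loss w.r.t. the weight matrix W.
    p = softmax(x @ W)
    p[y] -= 1.0
    return np.outer(x, p)

def meta_weights(W, x, cand, X_val, y_val, lr=0.1):
    """Hypothetical MoGD-style re-weighting: try a one-step update with
    each candidate label, measure the validation (meta) loss afterwards,
    and weight candidates by softmax over the negative meta losses."""
    val_losses = []
    for y in cand:
        W_try = W - lr * grad_ce(W, x, y)
        losses = [cross_entropy(W_try, xv, yv) for xv, yv in zip(X_val, y_val)]
        val_losses.append(np.mean(losses))
    return softmax(-np.array(val_losses))

# Toy setup: 2 features, 3 classes, one training instance with 2 candidates.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3)) * 0.1
x = np.array([1.0, -0.5])                 # training instance
cand = [0, 2]                             # candidate label set
X_val = rng.normal(size=(5, 2))           # small clean validation set
y_val = rng.integers(0, 3, size=5)

w = meta_weights(W, x, cand, X_val, y_val)
# The classifier would then be trained on the weighted CE loss:
loss = sum(wi * cross_entropy(W, x, y) for wi, y in zip(w, cand))
```

In practice the paper optimizes deep networks with SGD rather than this finite lookahead on a linear model; the sketch only shows how a validation-set meta objective can turn candidate labels into soft weights for a weighted cross-entropy loss.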