A complementary label (CL) simply indicates an incorrect class of an example, yet learning with CLs results in multi-class classifiers that can predict the correct class. Unfortunately, the existing problem setting allows only a single CL for each example, which notably limits its potential, since labelers may easily identify multiple CLs (MCLs) for one example. In this paper, we propose a novel problem setting that allows MCLs for each example, together with two ways of learning with MCLs. In the first way, we design two wrappers that decompose MCLs into many single CLs, so that we can use any existing method for learning with CLs. However, the supervision information that MCLs hold is conceptually diluted after decomposition. Thus, in the second way, we derive an unbiased risk estimator; minimizing it processes each set of MCLs as a whole and enjoys an estimation error bound. We further improve the second way by minimizing properly chosen upper bounds of the risk. Experiments show that the former way works well for learning with MCLs, but the latter is even better.
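To make the first way concrete, below is a minimal sketch of the decomposition idea: each example paired with a set of MCLs is split into several copies, each paired with a single CL, so that any existing single-CL method can then be applied. All function and variable names here are illustrative assumptions, not the paper's API.

```python
from typing import Any, List, Tuple

def decompose_mcls(
    dataset: List[Tuple[Any, List[int]]]
) -> List[Tuple[Any, int]]:
    """Hypothetical wrapper: turn (example, MCL set) pairs into
    (example, single CL) pairs. Each complementary label marks a
    class the example does NOT belong to."""
    single_cl_data = []
    for x, mcl_set in dataset:
        for cl in mcl_set:
            # duplicate the example once per complementary label
            single_cl_data.append((x, cl))
    return single_cl_data

# Usage: an example known NOT to be class 2 or class 5
# becomes two single-CL training examples.
decomposed = decompose_mcls([("img_001", [2, 5])])
assert decomposed == [("img_001", 2), ("img_001", 5)]
```

Note that this decomposition treats each CL independently, which is exactly why the abstract calls the supervision "conceptually diluted": the joint constraint that the true class lies outside the whole MCL set is lost, motivating the second way that processes each MCL set as a whole.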