Multi-label learning (MLL) learns from examples that are each associated with multiple labels simultaneously, where the high cost of annotating all relevant labels for each training example poses a challenge for real-world applications. To cope with this challenge, we investigate single-positive multi-label learning (SPMLL), where each example is annotated with only one relevant label, and show that one can successfully learn a theoretically grounded multi-label classifier for this problem. In this paper, a novel SPMLL method named {\proposed}, i.e., Single-positive MultI-label learning with Label Enhancement, is proposed. Specifically, an unbiased risk estimator is derived, which is guaranteed to approximately converge to the optimal risk minimizer of fully supervised learning and shows that one positive label per instance is sufficient to train the predictive model. Then, the corresponding empirical risk estimator is established by recovering the latent soft labels via a label enhancement process, where the posterior density of the latent soft labels is approximated by a variational Beta density parameterized by an inference model. Experiments on benchmark datasets validate the effectiveness of the proposed method.
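To make the variational label-enhancement step concrete, the following is a minimal illustrative sketch (not the paper's implementation) of an inference network that maps instance features to per-class Beta parameters, draws reparameterized soft labels, and regularizes the posterior toward a Beta prior with a KL term. All names (\texttt{InferenceNet}, the layer sizes, and the uniform Beta(1,1) prior) are hypothetical assumptions for illustration only.
\begin{verbatim}
# Hypothetical sketch of variational label enhancement with a Beta posterior.
# Not the authors' code; names and architecture are illustrative assumptions.
import torch
import torch.nn as nn
from torch.distributions import Beta, kl_divergence

class InferenceNet(nn.Module):
    """Maps instance features to per-class Beta(alpha, beta) parameters."""
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())
        self.alpha_head = nn.Linear(256, num_classes)
        self.beta_head = nn.Linear(256, num_classes)

    def forward(self, x: torch.Tensor) -> Beta:
        h = self.backbone(x)
        # softplus keeps both concentration parameters strictly positive
        alpha = nn.functional.softplus(self.alpha_head(h)) + 1e-4
        beta = nn.functional.softplus(self.beta_head(h)) + 1e-4
        return Beta(alpha, beta)

# Usage: sample latent soft labels and compute a KL regularizer.
feat_dim, num_classes = 128, 20
net = InferenceNet(feat_dim, num_classes)
x = torch.randn(8, feat_dim)                   # a batch of instance features
q = net(x)                                     # variational Beta posterior
soft_labels = q.rsample()                      # reparameterized soft labels in (0, 1)
prior = Beta(torch.ones(num_classes), torch.ones(num_classes))
kl = kl_divergence(q, prior).sum(dim=1).mean() # KL(q || prior), averaged over batch
\end{verbatim}
In such a sketch, the sampled soft labels would stand in for the unobserved full label vector when evaluating the empirical risk, while the KL term keeps the variational Beta posterior well behaved.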