In Multi-Label Learning (MLL), it is extremely challenging to accurately annotate every object that appears, due to high annotation costs and limited knowledge. Facing this challenge, a more practical and cheaper alternative is Single Positive Multi-Label Learning (SPMLL), where only one positive label needs to be provided per sample. Existing SPMLL methods usually treat unknown labels as negatives, which inevitably introduces false negatives as noisy labels. More seriously, the Binary Cross Entropy (BCE) loss is often used for training, and it is notoriously non-robust to noisy labels. To mitigate this issue, we customize an objective function for SPMLL that pushes only one pair of labels apart at a time, preventing the domination of negative labels, which is the main culprit behind fitting noisy labels in SPMLL. To further combat such noisy labels, we exploit the high-rankness of the label matrix, which can also push different labels apart. By directly extending the loss from SPMLL to MLL with full labels, we derive a unified loss applicable to both settings. Experiments on real datasets demonstrate that the proposed loss is not only more robust to noisy labels in SPMLL but also works well with full labels. Moreover, we empirically find that high-rankness can mitigate the dramatic performance drop in SPMLL. Most surprisingly, even without any regularization or fine-tuned label correction, adopting our loss alone outperforms state-of-the-art SPMLL methods on CUB, a dataset that severely lacks labels.
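The abstract does not spell out the objective in closed form. As a minimal sketch of the "push only one pair of labels apart at a time" idea, and under our own assumptions rather than the paper's actual formulation, the PyTorch snippet below ranks the single observed positive logit above each remaining logit through independent pairwise log-sigmoid terms; the function name `single_pair_ranking_loss` and the averaging over pairs are illustrative choices, not the authors' definitions.

```python
import torch
import torch.nn.functional as F

def single_pair_ranking_loss(logits: torch.Tensor, pos_index: torch.Tensor) -> torch.Tensor:
    """Hypothetical pairwise objective for SPMLL (illustrative sketch only).

    For each sample with a single observed positive label, push the positive
    logit above each unobserved label's logit one pair at a time, then average
    over pairs so that no single group of labels dominates the gradient.

    logits:    (batch, num_labels) raw scores from the model
    pos_index: (batch,) index of the single observed positive label per sample
    """
    batch_size, num_labels = logits.shape
    pos_logit = logits.gather(1, pos_index.unsqueeze(1))      # (batch, 1)
    # Margin between the positive logit and every other label's logit.
    diff = pos_logit - logits                                  # (batch, num_labels)
    # Pairwise ranking term: softplus(-x) == -log sigmoid(x).
    pair_loss = F.softplus(-diff)
    # Exclude the trivial (positive, positive) pair from the average.
    mask = torch.ones_like(pair_loss)
    mask.scatter_(1, pos_index.unsqueeze(1), 0.0)
    return (pair_loss * mask).sum(dim=1).div(num_labels - 1).mean()

# Toy usage: 4 samples, 6 labels, each with one observed positive label.
logits = torch.randn(4, 6, requires_grad=True)
pos_index = torch.tensor([0, 2, 5, 1])
loss = single_pair_ranking_loss(logits, pos_index)
loss.backward()
```

Because each term involves exactly one (positive, other) pair, the gradient on the positive label is balanced against each candidate negative individually, which is one way to keep the many assumed negatives from dominating training; the paper's unified loss may differ in how pairs are weighted or aggregated.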