In multi-label learning, the issue of missing labels poses a major challenge. Many methods attempt to recover missing labels by exploiting the low-rank structure of the label matrix. However, these methods utilize only the global low-rank label structure, ignoring to some extent both local low-rank label structures and label discriminant information, which leaves room for further performance improvement. In this paper, we develop a simple yet effective discriminant multi-label learning (DM2L) method for multi-label learning with missing labels. Specifically, we impose low-rank structures on the predictions of instances sharing the same labels (local shrinking of rank), and a maximally separated structure (high-rank structure) on the predictions of instances from different labels (global expanding of rank). In this way, the imposed low-rank structures help model both local and global low-rank label structures, while the imposed high-rank structure helps provide more underlying discriminability. Our subsequent theoretical analysis also supports these intuitions. In addition, we provide a nonlinear extension of DM2L via the kernel trick and establish a concave-convex objective to learn these models. Compared to other methods, our method involves the fewest assumptions and only one hyper-parameter. Even so, extensive experiments show that our method still outperforms the state-of-the-art methods.
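To make the local-shrinking / global-expanding idea concrete, below is a minimal numpy sketch of an objective in the spirit of the abstract. It is an illustrative assumption, not the paper's exact formulation: it assumes a linear predictor `W`, a squared loss on observed label entries, nuclear norms on label-wise prediction blocks for the local low-rank term (convex), and a subtracted nuclear norm on all predictions for the global high-rank term (concave), which together give a concave-convex (difference-of-convex) objective.

```python
import numpy as np

def dm2l_style_objective(X, Y, W, mask, lam=0.1):
    """Illustrative objective in the spirit of the DM2L abstract (hypothetical
    formulation, not taken from the paper).

    X    : (n, d) feature matrix
    Y    : (n, q) label matrix in {0, 1}, with possibly missing entries
    W    : (d, q) linear predictor
    mask : (n, q) binary mask, 1 where the label is observed
    lam  : single trade-off hyper-parameter (the abstract mentions only one)
    """
    P = X @ W  # predictions, shape (n, q)

    # Squared loss restricted to observed label entries (missing ones masked out).
    loss = np.sum(mask * (P - Y) ** 2)

    # Local shrinking of rank: nuclear norm of the predictions of instances
    # relevant to each label (convex part of the objective).
    local = 0.0
    for j in range(Y.shape[1]):
        idx = (Y[:, j] == 1) & (mask[:, j] == 1)
        if idx.any():
            local += np.linalg.norm(P[idx], ord='nuc')

    # Global expanding of rank: subtracting the nuclear norm of the full
    # prediction matrix encourages separation across labels (concave part).
    global_term = np.linalg.norm(P, ord='nuc')

    return loss + lam * (local - global_term)
```

Because the objective is a difference of convex functions, a natural solver under these assumptions would be a concave-convex procedure (CCCP) that linearizes the concave part at each iteration; the paper's actual optimization may differ.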