In the era of deep learning, the loss function determines the range of tasks available to models and algorithms. To support the application of deep learning to multi-label classification (MLC) tasks, we propose the ZLPR (zero-bounded log-sum-exp \& pairwise rank-based) loss in this paper. Compared with other rank-based losses for MLC, ZLPR can handle problems in which the number of target labels is uncertain, which, from this point of view, makes it as capable as the other two strategies commonly used in MLC, namely binary relevance (BR) and label powerset (LP). Additionally, ZLPR takes the correlation between labels into consideration, which makes it more comprehensive than BR methods. In terms of computational complexity, ZLPR is competitive with BR methods because its prediction is also label-independent, so it takes less time and memory than LP methods. Our experiments demonstrate the effectiveness of ZLPR on multiple benchmark datasets under multiple evaluation metrics. Moreover, we propose a soft version of ZLPR and the corresponding KL-divergence computation, which makes it possible to apply regularization tricks such as label smoothing to enhance the generalization of models.
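To make the abstract's claims concrete, the sketch below shows the ZLPR loss for a single sample and its zero-bounded prediction rule, assuming the standard formulation $\log\bigl(1 + \sum_{i \in \Omega_{neg}} e^{s_i}\bigr) + \log\bigl(1 + \sum_{j \in \Omega_{pos}} e^{-s_j}\bigr)$; function and variable names are illustrative, not from the paper.

```python
import numpy as np

def zlpr_loss(scores, labels):
    """ZLPR loss for one sample (illustrative sketch).

    scores: raw logits, one per label.
    labels: 0/1 target vector of the same length.
    Computes log(1 + sum_{neg} exp(s_i)) + log(1 + sum_{pos} exp(-s_j)).
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Prepending 0 and reducing with logaddexp gives log(1 + sum exp(.))
    # in a numerically stable way.
    loss_neg = np.logaddexp.reduce(np.concatenate(([0.0], neg)))
    loss_pos = np.logaddexp.reduce(np.concatenate(([0.0], -pos)))
    return loss_neg + loss_pos

def predict(scores):
    """Zero-bounded decision rule: emit every label whose score exceeds 0.

    Because the threshold is fixed at zero, the number of predicted
    labels is not fixed in advance, and each label is decided
    independently (hence the label-independent prediction noted above).
    """
    return (np.asarray(scores) > 0).astype(int)
```

The pairwise-rank character comes from expanding the two log terms: minimizing the loss pushes every positive score above every negative score and simultaneously above the zero bound, which is what lets the prediction step recover a variable-sized label set.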