Label distribution learning (LDL) is an effective method for predicting the label description degrees (a.k.a. the label distribution) of a sample. However, annotating label distributions (LDs) for training samples is extremely costly. Thus, recent studies often first use label enhancement (LE) to generate an estimated label distribution from the logical labels, and then apply an external LDL algorithm to the recovered label distribution to predict the label distributions of unseen samples. However, this step-wise manner overlooks possible connections between LE and LDL. Moreover, existing LE approaches may assign description degrees to invalid labels. To solve these problems, we propose a novel method that learns an LDL model directly from the logical labels, unifying LE and LDL into a joint model and avoiding the drawbacks of previous LE methods. Extensive experiments on various datasets demonstrate that the proposed approach can construct a reliable LDL model directly from the logical labels and produces more accurate label distributions than state-of-the-art LE methods.
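To make the contrast concrete, the toy sketch below shows the difference between a logical (binary) label vector and a label distribution, together with a deliberately naive label-enhancement step that spreads unit mass uniformly over the relevant labels. This uniform normalization is purely illustrative and is not the method proposed in this work or any existing LE algorithm.

```python
import numpy as np

# Logical label vector: each label is simply marked relevant (1) or not (0).
logical = np.array([1, 0, 1, 0, 0])

def naive_label_enhancement(logical_label):
    """Illustrative LE placeholder: spread unit mass uniformly
    over the relevant labels so degrees sum to 1."""
    degrees = logical_label.astype(float)
    return degrees / degrees.sum()

# A label distribution assigns each label a description degree in [0, 1],
# with all degrees summing to 1 over the label set.
ld = naive_label_enhancement(logical)
print(ld)        # [0.5 0.  0.5 0.  0. ]
print(ld.sum())  # 1.0
```

Note that this uniform scheme trivially assigns zero degree to invalid (logical-0) labels, but it also ignores all structure among the relevant labels; real LE methods estimate non-uniform degrees, which is where the risk of leaking degree onto invalid labels arises.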