As an important branch of weakly supervised learning, partial label learning deals with data where each instance is assigned a set of candidate labels, only one of which is true. Despite the many methodological studies on learning from partial labels, theoretical understanding of their risk-consistency properties under relatively weak assumptions is still lacking, especially regarding the link between theoretical results and the empirical choice of parameters. In this paper, we propose a family of loss functions named \textit{Leveraged Weighted} (LW) loss, which for the first time introduces the leverage parameter $\beta$ to account for the trade-off between losses on partial labels and non-partial ones. On the theoretical side, we derive a generalized risk-consistency result for the LW loss in learning from partial labels, based on which we provide guidance for the choice of the leverage parameter $\beta$. In experiments, we verify this theoretical guidance and demonstrate the high effectiveness of our proposed LW loss on both benchmark and real datasets, compared with other state-of-the-art partial label learning algorithms.
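To make the role of $\beta$ concrete, a representative member of such a family could take the following form (a sketch consistent with the description above, not necessarily the paper's exact definition; the binary loss $\psi$, the per-label weights $w_j$, and the per-label scores $f_j$ are assumed notation):
\[
\mathcal{L}_{\beta}\big(f(x), S\big) \;=\; \sum_{i \in S} w_i\, \psi\big(f_i(x)\big) \;+\; \beta \sum_{j \notin S} w_j\, \psi\big(-f_j(x)\big),
\]
where $S$ is the candidate label set of instance $x$ and $\psi$ is a decreasing binary loss (e.g., hinge). The first sum penalizes low scores on candidate (partial) labels, the second penalizes high scores on non-candidate labels, and the leverage parameter $\beta \ge 0$ trades one term off against the other: $\beta = 0$ ignores non-candidate labels entirely, while larger $\beta$ pushes their scores down more aggressively.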