In statistical learning, many problem formulations have been proposed so far, such as multi-class learning, complementarily labeled learning, multi-label learning, and multi-task learning, which provide theoretical models for various real-world tasks. Although they have been extensively studied, the relationships among them have not been fully investigated. In this work, we focus on a particular problem formulation called Multiple-Instance Learning (MIL), and show that various learning problems, including all the problems mentioned above as well as some new ones, can be reduced to MIL with theoretically guaranteed generalization bounds, where the reductions are established under a new reduction scheme we provide as a by-product. The results imply that the MIL-reduction gives a simplified and unified framework for designing and analyzing algorithms for various learning problems. Moreover, we show that the MIL-reduction framework can be kernelized.