Can we learn a multi-class classifier from data of only a single class? We show that, without any assumptions on the loss function, model, or optimizer, a multi-class classifier can be learned from single-class data with a rigorous consistency guarantee, provided that confidences (i.e., the class-posterior probabilities for all the classes) are available. Specifically, we propose an empirical risk minimization framework that is loss-, model-, and optimizer-independent. Instead of constructing a boundary between the given class and the other classes, our method performs discriminative classification among all the classes even though no data from the other classes are provided. We further show, both theoretically and experimentally, that with a simple modification our method remains Bayes-consistent even when the provided confidences are highly noisy. We then extend our method to the case where data from a subset of all the classes are available. Experimental results demonstrate the effectiveness of our methods.
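The abstract does not spell out the risk estimator, but the described framework suggests one natural confidence-weighted rewrite: with data drawn from a single class $k$ and the posteriors $\eta_y(x) = p(y \mid x)$ given, the classification risk can be re-expressed over $p(x \mid y = k)$ via importance weights, $R(f) \propto \mathbb{E}_{p(x \mid y=k)}\big[\sum_y \tfrac{\eta_y(x)}{\eta_k(x)}\, \ell(f(x), y)\big]$, up to the constant prior $p(y = k)$. Below is a minimal PyTorch sketch of such an objective under this assumption; the name `sc_conf_risk`, the cross-entropy instantiation of the loss, and the clamping constant are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sc_conf_risk(logits, conf, k):
    """Confidence-weighted empirical risk from single-class data (sketch).

    logits : (n, C) model outputs on points x_1..x_n drawn from class k only.
    conf   : (n, C) given confidences, conf[i, y] ~= p(y | x_i).
    k      : index of the single observed class.
    """
    log_probs = F.log_softmax(logits, dim=1)    # per-class log-likelihoods
    # importance weights eta_y(x) / eta_k(x); clamp guards division by ~0
    weights = conf / conf[:, k:k + 1].clamp_min(1e-12)
    # weighted cross-entropy over all classes, averaged over the sample;
    # equals the classification risk up to the constant prior p(y = k)
    return -(weights * log_probs).sum(dim=1).mean()

# toy usage: 3-class problem, data from class k = 0, synthetic confidences
n, C = 8, 3
model = torch.nn.Linear(5, C)
x = torch.randn(n, 5)
conf = torch.softmax(torch.randn(n, C), dim=1)  # stand-in posterior vectors
loss = sc_conf_risk(model(x), conf, k=0)
loss.backward()
```

Because the weighting is independent of the loss, model, and optimizer, any classification-calibrated loss could replace the cross-entropy above, which is consistent with the loss-/model-/optimizer-independence claimed in the abstract.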