From only positive (P) and unlabeled (U) data, a binary classifier can be trained via PU learning, in which the state of the art is unbiased PU learning. However, if the model is very flexible, the empirical risk on training data goes negative, and we suffer from serious overfitting. In this paper, we propose a non-negative risk estimator for PU learning: when it is being minimized, it is more robust against overfitting, and thus we are able to use very flexible models (such as deep neural networks) given limited P data. Moreover, we analyze the bias, consistency, and mean-squared-error reduction of the proposed risk estimator, and bound the estimation error of the resulting empirical risk minimizer. Experiments demonstrate that our risk estimator fixes the overfitting problem of its unbiased counterpart.
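The non-negative risk estimator described above can be sketched concisely. The sketch below is an illustrative NumPy implementation, not the paper's reference code: it assumes the sigmoid loss as the surrogate loss, takes raw classifier scores `g_p` / `g_u` on the P and U samples, and `pi_p` for the class prior of the positive class; the key step is clipping the unlabeled (negative-class) risk term at zero so the total empirical risk can never go negative.

```python
import numpy as np

def nn_pu_risk(g_p, g_u, pi_p):
    """Non-negative PU empirical risk (illustrative sketch).

    R = pi_p * R_p^+ + max(0, R_u^- - pi_p * R_p^-)

    g_p  : classifier scores on positive (P) samples
    g_u  : classifier scores on unlabeled (U) samples
    pi_p : class prior of the positive class (assumed known)
    """
    # Sigmoid surrogate loss on the margin z = y * g(x): small when z >> 0.
    loss = lambda z: 1.0 / (1.0 + np.exp(z))

    r_p_pos = loss(g_p).mean()    # risk of P data treated as positive
    r_p_neg = loss(-g_p).mean()   # risk of P data treated as negative
    r_u_neg = loss(-g_u).mean()   # risk of U data treated as negative

    # Clipping the second term at zero is what makes the estimator
    # non-negative; dropping the max(...) recovers the unbiased estimator.
    return pi_p * r_p_pos + max(0.0, r_u_neg - pi_p * r_p_neg)
```

With a flexible model, the unclipped term `r_u_neg - pi_p * r_p_neg` can be driven below zero on training data; the `max(0, ...)` prevents exactly that failure mode.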