Given only positive examples and unlabeled examples (drawn from both the positive and negative classes), we might hope nevertheless to estimate an accurate positive-versus-negative classifier. Formally, this task is broken down into two subtasks: (i) Mixture Proportion Estimation (MPE) -- determining the fraction of positive examples in the unlabeled data; and (ii) PU-learning -- given such an estimate, learning the desired positive-versus-negative classifier. Unfortunately, classical methods for both problems break down in high-dimensional settings. Meanwhile, recently proposed heuristics lack theoretical coherence and depend precariously on hyperparameter tuning. In this paper, we propose two simple techniques: Best Bin Estimation (BBE) for MPE, and Conditional Value Ignoring Risk (CVIR), a simple objective for PU-learning. Both methods dominate previous approaches empirically, and for BBE, we establish formal guarantees that hold whenever we can train a model to cleanly separate out a small subset of positive examples. Our final algorithm, (TED)$^n$, alternates between the two procedures, significantly improving both our mixture proportion estimator and classifier.
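To make the BBE idea concrete, here is a minimal sketch, assuming we already have a trained probabilistic classifier and its scores on held-out positive and unlabeled examples. The function name `bbe_estimate` and the exact form of the confidence penalty are illustrative assumptions for this sketch, not the paper's precise procedure.

```python
import numpy as np

def bbe_estimate(scores_pos, scores_unl, delta=0.1, gamma=0.01):
    """Illustrative Best Bin Estimation (BBE) sketch (hypothetical helper).

    scores_pos: classifier scores on held-out *positive* examples.
    scores_unl: classifier scores on held-out *unlabeled* examples.
    Returns an upper-confidence estimate of the fraction of positives
    in the unlabeled data (the mixture proportion).
    """
    scores_pos = np.asarray(scores_pos)
    scores_unl = np.asarray(scores_unl)
    n_p, n_u = len(scores_pos), len(scores_unl)
    best = 1.0
    # Scan every observed score as a candidate threshold ("bin").
    for c in np.unique(np.concatenate([scores_pos, scores_unl])):
        q_p = np.mean(scores_pos >= c)  # positives scoring above c
        q_u = np.mean(scores_unl >= c)  # unlabeled scoring above c
        if q_p == 0:
            continue
        # Since q_u(c) = alpha * q_p(c) + (1 - alpha) * q_n(c),
        # the ratio q_u / q_p upper-bounds alpha; the added term is
        # an illustrative finite-sample confidence penalty.
        penalty = ((1 + gamma) / q_p) * (
            np.sqrt(np.log(4 / delta) / (2 * n_u))
            + np.sqrt(np.log(4 / delta) / (2 * n_p))
        )
        best = min(best, q_u / q_p + penalty)
    return best
```

The minimization over thresholds is what ties the estimator to the abstract's guarantee: the ratio q_u(c)/q_p(c) always upper-bounds the true mixture proportion, and it becomes tight exactly when some top-scoring "bin" contains almost purely positive examples, i.e., when the classifier cleanly separates out a small subset of positives.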