In classification with a reject option, the classifier is allowed to abstain from prediction in uncertain cases. The classical cost-based model of a reject option classifier requires the cost of rejection to be defined explicitly. An alternative bounded-improvement model, which avoids the notion of the reject cost, seeks a classifier with guaranteed selective risk and maximal coverage. We coin a symmetric definition, the bounded-coverage model, which seeks a classifier with minimal selective risk and guaranteed coverage. We prove that, despite their different formulations, the three rejection models lead to the same prediction strategy: a Bayes classifier endowed with a randomized Bayes selection function. We define the notion of a proper uncertainty score as a scalar summary of prediction uncertainty that is sufficient to construct the randomized Bayes selection function. We propose two algorithms to learn the proper uncertainty score from examples for an arbitrary black-box classifier. We prove that both algorithms provide Fisher-consistent estimates of the proper uncertainty score, and we demonstrate their efficiency on different prediction problems including classification, ordinal regression, and structured output classification.
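As a concrete illustration (not taken from the paper itself), the following Python sketch shows one way the resulting prediction strategy can be realized for a black-box classifier whose outputs are interpreted as class posteriors. The plug-in Bayes classifier predicts the most probable class, an uncertainty score orders the examples (here 1 minus the maximal posterior, a natural choice for 0/1 loss and purely an illustrative assumption), and a randomized selection function accepts the most certain examples, randomizing on the threshold so that a target coverage is met exactly, in the spirit of the bounded-coverage model. All function and variable names are hypothetical.

import numpy as np

def selection_threshold(scores, target_coverage):
    """Choose a threshold tau and an acceptance probability p_tau at the
    threshold so that the expected fraction of accepted examples equals
    target_coverage. `scores` are uncertainty scores (higher = more uncertain)."""
    n = len(scores)
    order = np.sort(scores)
    k = int(np.floor(target_coverage * n))        # examples accepted outright
    if k >= n:
        return np.inf, 1.0                        # accept everything
    tau = order[k]                                # threshold uncertainty value
    below = np.sum(scores < tau)                  # strictly more certain than tau
    at = np.sum(scores == tau)                    # tied with the threshold
    # randomize on the ties so the expected coverage matches exactly
    p_tau = (target_coverage * n - below) / at
    return tau, p_tau

def selective_predict(posteriors, tau, p_tau, rng):
    """Bayes prediction with a randomized selection function:
    accept if score < tau, reject if score > tau, accept w.p. p_tau at tau."""
    y_hat = np.argmax(posteriors, axis=1)         # plug-in Bayes classifier
    scores = 1.0 - posteriors.max(axis=1)         # illustrative uncertainty score
    accept = (scores < tau) | ((scores == tau) & (rng.random(len(scores)) < p_tau))
    return y_hat, accept

# Example usage on synthetic posteriors, targeting 80% coverage:
rng = np.random.default_rng(0)
posteriors = rng.dirichlet(np.ones(3), size=1000)
tau, p_tau = selection_threshold(1.0 - posteriors.max(axis=1), target_coverage=0.8)
y_hat, accept = selective_predict(posteriors, tau, p_tau, rng)

The randomization at the threshold is what makes the selection function hit the coverage constraint exactly rather than only approximately; this is the role played by the randomized Bayes selection function in the abstract, although the uncertainty score used above is only a simple stand-in for the learned proper uncertainty score.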