We propose a new Reject Option Classification technique to identify and remove regions of uncertainty in the decision space for a given neural classifier and dataset. Existing formulations employ a learned rejection (remove)/selection (keep) function and require either a known cost for rejecting examples or strong constraints on the accuracy or coverage of the selected examples. We consider an alternative formulation that instead analyzes the complementary reject region and employs a validation set to learn per-class softmax thresholds. The goal is to maximize the accuracy of the selected examples subject to a natural randomness allowance on the rejected examples (rejecting more incorrect than correct predictions). We provide results showing the benefits of the proposed method over na\"ively thresholding calibrated/uncalibrated softmax scores on 2-D point, image, and text classification datasets using state-of-the-art pretrained models. Source code is available at https://github.com/osu-cvl/learning-idk.
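To make the thresholding idea concrete, below is a minimal sketch of learning per-class softmax thresholds on a validation set. It is an illustrative stand-in, not the paper's exact algorithm: for each predicted class, it greedily picks the largest threshold such that, among the examples it would reject, incorrect predictions outnumber correct ones (a simple proxy for the randomness allowance described above). The function names and the constraint check are assumptions for illustration.

```python
import numpy as np

def learn_per_class_thresholds(probs, labels, num_classes):
    """Learn one softmax threshold per predicted class on a validation set.

    For each class c, consider candidate thresholds drawn from the observed
    max-softmax confidences of examples predicted as c, and keep the largest
    threshold whose reject set contains more incorrect than correct
    predictions (illustrative proxy for the paper's randomness allowance).
    """
    preds = probs.argmax(axis=1)          # predicted class per example
    conf = probs.max(axis=1)              # max-softmax confidence per example
    thresholds = np.zeros(num_classes)    # default 0.0 = keep everything
    for c in range(num_classes):
        mask = preds == c
        if not mask.any():
            continue
        conf_c = conf[mask]
        correct_c = labels[mask] == c
        # A threshold t is valid if rejected examples (conf < t) are
        # more often incorrect than correct.
        valid = [t for t in np.unique(conf_c)
                 if ((conf_c < t) & ~correct_c).sum()
                    > ((conf_c < t) & correct_c).sum()]
        thresholds[c] = max(valid, default=0.0)
    return thresholds

def select(probs, thresholds):
    """Return a boolean mask: True = keep the prediction, False = reject."""
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    return conf >= thresholds[preds]
```

At test time, examples whose confidence falls below the threshold of their predicted class are rejected ("I don't know"), and accuracy is reported only over the selected examples.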