We tackle the problem of selective classification, where the objective is to achieve the best performance on a predetermined ratio (coverage) of the dataset. Recent state-of-the-art selective methods introduce architectural changes, either a separate selection head or an extra abstention logit. In this paper, we challenge these methods and confirm that their superior performance is owed to training a more generalizable classifier rather than to their proposed selection mechanisms. We argue that the best-performing selection mechanism should instead be rooted in the classifier itself. Our proposed selection strategy uses the classification scores and achieves better results by a significant margin, consistently across all coverages and all datasets, without any added compute cost. Furthermore, inspired by semi-supervised learning, we propose an entropy-based regularizer that improves the performance of selective classification methods. Our proposed selection mechanism combined with the proposed entropy-based regularizer achieves new state-of-the-art results.
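The two ingredients described above can be made concrete. The sketch below is illustrative only: it assumes the selection score is the classifier's maximum softmax probability and that the regularizer takes the standard entropy-minimization form from semi-supervised learning. The function names, the batch-level coverage thresholding, and the weight `beta` are hypothetical placeholders, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def select_by_classifier_score(logits: torch.Tensor, coverage: float) -> torch.Tensor:
    """Accept the most confident `coverage` fraction of samples, ranked by the
    classifier's own softmax scores (maximum class probability)."""
    confidence = F.softmax(logits, dim=1).max(dim=1).values
    k = max(1, int(round(coverage * logits.size(0))))
    # Threshold set so that exactly k samples in this batch are accepted.
    threshold = torch.topk(confidence, k).values[-1]
    return confidence >= threshold  # boolean acceptance mask


def entropy_regularized_loss(logits: torch.Tensor, targets: torch.Tensor,
                             beta: float = 0.1) -> torch.Tensor:
    """Cross-entropy plus a weighted mean predictive entropy
    (entropy-minimization term, as used in semi-supervised learning).
    `beta` is a hypothetical weighting hyperparameter."""
    ce = F.cross_entropy(logits, targets)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=1).mean()
    return ce + beta * entropy
```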