Rule-based classification models described in the language of logic directly predict Boolean values, rather than modeling a probability and translating it into a prediction as is done in statistical models. The vast majority of existing uncertainty quantification approaches rely on a continuous model output that rule-based models do not provide. In this work, we propose an uncertainty quantification framework in the form of a meta-model that takes any binary classifier with binary output as a black box and estimates the prediction accuracy of that base model at a given input, along with a level of confidence in that estimate. The confidence is based on how well the region around the input has been explored and is designed to work in any out-of-distribution (OOD) scenario. We demonstrate the usefulness of this uncertainty model by building an abstaining classifier powered by it and observing its performance in various scenarios.
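To make the abstaining-classifier idea concrete, the following is a minimal Python sketch, not the paper's implementation: a meta-model is trained on held-out calibration data to predict whether the black-box base model's output at a given input is correct, and the wrapper abstains when that estimated accuracy falls below a threshold. The class `AbstainingClassifier`, the use of a random forest as the meta-model, and all other names are illustrative assumptions; the sketch also omits the paper's exploration-based confidence on the accuracy estimate.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier


class AbstainingClassifier:
    """Hypothetical wrapper: abstains when the estimated accuracy of the
    black-box base model at an input is below a threshold."""

    def __init__(self, base_predict, accuracy_threshold=0.8):
        # base_predict: black-box function mapping an (n, d) array to {0, 1} labels.
        self.base_predict = base_predict
        self.accuracy_threshold = accuracy_threshold
        # Probabilistic stand-in for the paper's uncertainty meta-model:
        # it estimates P(base prediction is correct | x).
        self.meta_model = RandomForestClassifier(n_estimators=200, random_state=0)

    def fit(self, X_cal, y_cal):
        # Train the meta-model on calibration data: the target is 1 where the
        # base model's prediction matches the true label, 0 otherwise.
        correct = (self.base_predict(X_cal) == y_cal).astype(int)
        self.meta_model.fit(X_cal, correct)
        return self

    def predict(self, X):
        # Return the base prediction where the estimated accuracy clears the
        # threshold, and -1 (abstain) elsewhere.
        base_pred = self.base_predict(X)
        est_accuracy = self.meta_model.predict_proba(X)[:, 1]
        return np.where(est_accuracy >= self.accuracy_threshold, base_pred, -1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy black-box rule-based classifier: a single threshold rule on feature 0.
    rule_model = lambda X: (X[:, 0] > 0.0).astype(int)
    # Synthetic labels that only partly follow the rule, so the rule is accurate
    # in some regions of the input space and not in others.
    X = rng.normal(size=(2000, 2))
    y = ((X[:, 0] + 0.5 * X[:, 1]) > 0.0).astype(int)
    clf = AbstainingClassifier(rule_model, accuracy_threshold=0.9)
    clf.fit(X[:1000], y[:1000])
    pred = clf.predict(X[1000:])
    kept = pred != -1
    print(f"abstention rate: {1 - kept.mean():.2f}, "
          f"accuracy on kept inputs: {(pred[kept] == y[1000:][kept]).mean():.2f}")
```

In this toy setup the wrapper trades coverage for reliability: it answers only where the meta-model estimates the rule to be accurate and abstains elsewhere, which is the behavior evaluated in the experiments described above.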