In this paper, we propose a simple yet effective method for handling violations of the Closed-World Assumption in classifiers. Previous works typically apply a threshold, either to the classification scores or to the loss function, to reject inputs that violate the assumption. However, these methods cannot achieve the low False Positive Rate (FPR) required in safety-critical applications. The proposed method is a rejection option based on hypothesis testing with probabilistic networks. With probabilistic networks, it is possible to estimate a distribution of outcomes instead of a single output. By applying a Z-test to the per-class mean and standard deviation, the proposed method estimates the statistical significance of the network's certainty and rejects uncertain outputs. The method was evaluated on different configurations of the COCO and CIFAR datasets, and its performance is compared with Softmax Response, a known top-performing method. We show that the proposed method achieves a broader operating range and reaches a lower FPR than the alternative.
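The abstract does not fix the exact form of the Z-test, so the following is only a minimal sketch of one plausible reading: collect several stochastic forward passes (e.g. MC dropout), compute per-class means and standard deviations, and reject the prediction when a two-sample Z-test cannot separate the top class from the runner-up at significance level alpha. The function name, the two-sample formulation, and the sample counts are all illustrative assumptions, not the paper's definitive procedure.

```python
import numpy as np
from math import erf, sqrt

def z_test_reject(samples, alpha=0.05):
    """Illustrative rejection option via a Z-test between the top two classes.

    samples: (n, k) array of softmax outputs from n stochastic forward
    passes of a probabilistic network over k classes (assumed setup).
    Returns (predicted_class, reject_flag).
    """
    n, _ = samples.shape
    mean = samples.mean(axis=0)
    std = samples.std(axis=0, ddof=1)
    order = np.argsort(mean)[::-1]          # classes sorted by mean, descending
    top, runner = order[0], order[1]
    # Standard error of the difference between the top two class means.
    se = sqrt(std[top] ** 2 / n + std[runner] ** 2 / n) + 1e-12
    z = (mean[top] - mean[runner]) / se
    # One-sided p-value from the standard normal CDF.
    p = 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))
    # Reject when the gap between the top two classes is not significant.
    return int(top), p > alpha

# Confident case: class 0 dominates with small spread -> accepted.
confident = np.array([[0.90, 0.05, 0.05], [0.88, 0.07, 0.05]] * 15)
cls, rejected = z_test_reject(confident)

# Uncertain case: the top two classes overlap completely -> rejected.
uncertain = np.array([[0.6, 0.2, 0.2], [0.2, 0.6, 0.2]] * 15)
cls2, rejected2 = z_test_reject(uncertain)
```

Sweeping alpha trades coverage against FPR, which matches the abstract's claim that the method spans a broader operating range than a single score threshold.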