We present an approach to quantifying both aleatoric and epistemic uncertainty for deep neural networks in image classification, based on generative adversarial networks (GANs). While most works in the literature that use GANs to generate out-of-distribution (OoD) examples focus only on the evaluation of OoD detection, we present a GAN-based approach to learning a classifier that produces proper uncertainties for OoD examples as well as for false positives (FPs). Instead of shielding the entire in-distribution data with GAN-generated OoD examples, as in the current state of the art, we shield each class separately with out-of-class examples generated by a conditional GAN and complement this with a one-vs-all image classifier. In our experiments, in particular on CIFAR10, CIFAR100 and Tiny ImageNet, we improve over the OoD-detection and FP-detection performance of state-of-the-art GAN-training-based classifiers. Furthermore, we find that the generated GAN examples do not significantly affect the calibration error of our classifier, while yielding a significant gain in model accuracy.
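To make the per-class shielding concrete, the following is a minimal sketch (not the authors' implementation) of the kind of training objective the abstract describes: a one-vs-all classifier in which each class k is shielded by out-of-class examples drawn from a conditional GAN conditioned on k, and in which a low maximum per-class score signals OoD inputs or likely FPs. The generator here is a stand-in, and names such as OneVsAllHead and generate_out_of_class are hypothetical.

```python
# Sketch of a one-vs-all classifier head shielded per class by
# conditional-GAN out-of-class examples. Assumes PyTorch; the generator
# and feature tensors below are dummies for illustration only.
import torch
import torch.nn as nn

NUM_CLASSES = 10
FEAT_DIM = 128


class OneVsAllHead(nn.Module):
    """K independent binary (sigmoid) classifiers instead of one softmax."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.logits = nn.Linear(feat_dim, num_classes)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.logits(features)  # raw per-class logits


def generate_out_of_class(generator: nn.Module, labels: torch.Tensor,
                          latent_dim: int = 64) -> torch.Tensor:
    """Stand-in for a conditional GAN producing out-of-class samples for
    the classes given in `labels` (one generated sample per label)."""
    z = torch.randn(labels.size(0), latent_dim)
    cond = nn.functional.one_hot(labels, NUM_CLASSES).float()
    return generator(torch.cat([z, cond], dim=1))


def one_vs_all_loss(head: OneVsAllHead, feats: torch.Tensor, labels: torch.Tensor,
                    ooc_feats: torch.Tensor, ooc_labels: torch.Tensor) -> torch.Tensor:
    bce = nn.BCEWithLogitsLoss()
    # Real samples: target 1 for their own class, 0 for all other classes.
    targets = nn.functional.one_hot(labels, NUM_CLASSES).float()
    loss_real = bce(head(feats), targets)
    # GAN out-of-class samples: target 0 for the class they shield, so the
    # class-k classifier learns a tight boundary around class k.
    ooc_logits = head(ooc_feats).gather(1, ooc_labels.unsqueeze(1))
    loss_ooc = bce(ooc_logits, torch.zeros_like(ooc_logits))
    return loss_real + loss_ooc


if __name__ == "__main__":
    # Dummy feature-extractor outputs and a dummy conditional generator.
    feats = torch.randn(32, FEAT_DIM)
    labels = torch.randint(0, NUM_CLASSES, (32,))
    generator = nn.Linear(64 + NUM_CLASSES, FEAT_DIM)

    head = OneVsAllHead(FEAT_DIM, NUM_CLASSES)
    ooc = generate_out_of_class(generator, labels)
    loss = one_vs_all_loss(head, feats, labels, ooc, labels)
    loss.backward()

    # At test time, per-class sigmoid scores act as class probabilities;
    # a low maximum score flags OoD inputs or likely false positives.
    scores = torch.sigmoid(head(feats))
    uncertainty = 1.0 - scores.max(dim=1).values
    print(loss.item(), uncertainty.mean().item())
```

In this sketch the one-vs-all formulation is what lets each class be shielded independently: the generated examples only supply negative targets for the class they are conditioned on, rather than serving as a single extra "OoD" class for the whole in-distribution data.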