We present an approach to quantifying both aleatoric and epistemic uncertainty for deep neural networks in image classification, based on generative adversarial networks (GANs). While most works in the literature that use GANs to generate out-of-distribution (OoD) examples focus only on the evaluation of OoD detection, we present a GAN-based approach to learn a classifier that produces proper uncertainties for OoD examples as well as for false positives (FPs). Instead of shielding the entire in-distribution data with GAN-generated OoD examples, as is state of the art, we shield each class separately with out-of-class examples generated by a conditional GAN and complement this with a one-vs-all image classifier. In our experiments, in particular on CIFAR10 and CIFAR100, we improve over the OoD detection and FP detection performance of state-of-the-art GAN-training-based classifiers. Furthermore, we find that the generated GAN examples do not significantly affect the calibration error of our classifier, while yielding a significant gain in model accuracy.
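The per-class shielding described above can be illustrated with a one-vs-all loss: each class has its own sigmoid head, trained with its class as positive and everything else, including GAN-generated out-of-class examples, as negative. The following is a minimal sketch of that training objective only, not the paper's implementation; the function name and the convention of `label=None` for generated out-of-class examples are assumptions for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def one_vs_all_loss(logits, label):
    """One-vs-all binary cross-entropy for a single example (sketch).

    logits: list of K per-class scores, one sigmoid head per class.
    label:  index of the true class, or None for a GAN-generated
            out-of-class example (hypothetical convention).
    Head k treats class k as positive; all other examples, including
    out-of-class ones, act as negatives that "shield" that class.
    """
    loss = 0.0
    for k, z in enumerate(logits):
        p = sigmoid(z)
        if label is not None and k == label:
            loss -= math.log(p)        # positive target for its own head
        else:
            loss -= math.log(1.0 - p)  # negative target for every other head
    return loss
```

Under this objective, a confidently classified in-distribution example incurs a small loss, while the same confident logits on an out-of-class example are penalized heavily, which is what pushes the heads toward low confidence away from their own class.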