Quantifying the uncertainty in a model's predictions is important because it allows the safety of an AI system to be improved by acting on the model's output in an informed manner. This is crucial in applications where the cost of an error is high, such as autonomous vehicle control, medical image analysis, financial estimation, or the legal domain. Deep Neural Networks (DNNs) are powerful predictors that have recently achieved state-of-the-art performance on a wide spectrum of tasks, yet quantifying predictive uncertainty in DNNs remains a challenging and open problem. In this paper we propose a complete framework to capture and quantify three known types of uncertainty in DNNs for the task of image classification. The framework uses an ensemble of CNNs to capture model uncertainty, a supervised reconstruction auto-encoder to capture distributional uncertainty, and the output of the activation functions in the last layer of the network to capture data uncertainty. Finally, we demonstrate the effectiveness of our method on popular image classification datasets.
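As a rough illustration of how the three uncertainty signals described above could be combined at inference time, the sketch below assumes an already-trained ensemble of classifiers and a trained reconstruction auto-encoder. The function names (`ensemble`, `autoencoder`) and the specific measures used (predictive entropy for data uncertainty, ensemble mutual information for model uncertainty, reconstruction error for distributional uncertainty) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def predictive_entropy(p):
    """Entropy of a categorical distribution p (used here as an uncertainty proxy)."""
    return -np.sum(p * np.log(p + 1e-12), axis=-1)

def uncertainty_report(x, ensemble, autoencoder):
    """Illustrative combination of the three uncertainty signals.

    Assumptions (not from the paper): `ensemble` is a list of callables mapping
    an image batch (N, H, W, C) to softmax probabilities (N, K), and
    `autoencoder` maps the batch to a reconstruction of the same shape.
    """
    # Softmax outputs of each ensemble member: shape (M, N, K).
    probs = np.stack([member(x) for member in ensemble])
    mean_probs = probs.mean(axis=0)

    # Data (aleatoric) uncertainty: entropy of the averaged softmax output,
    # taken from the activations of the last layer of the network.
    data_unc = predictive_entropy(mean_probs)

    # Model (epistemic) uncertainty: mutual information between the prediction
    # and the ensemble member, i.e. total entropy minus mean member entropy.
    model_unc = data_unc - predictive_entropy(probs).mean(axis=0)

    # Distributional uncertainty: reconstruction error of the auto-encoder,
    # used here as an out-of-distribution score.
    recon = autoencoder(x)
    dist_unc = np.mean((x - recon) ** 2, axis=tuple(range(1, x.ndim)))

    return {"data": data_unc, "model": model_unc, "distributional": dist_unc}
```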