A key challenge for deploying deep neural networks (DNNs) in safety-critical settings is the need for rigorous ways to quantify their uncertainty. In this paper, we propose a novel algorithm for constructing predicted classification confidences for DNNs that comes with provable correctness guarantees. Our approach uses Clopper-Pearson confidence intervals for the binomial distribution in conjunction with the histogram binning approach to calibrated prediction. In addition, we demonstrate how our predicted confidences can be used to enable downstream guarantees in two settings: (i) fast DNN inference, where we show how to compose a fast but inaccurate DNN with an accurate but slow DNN in a rigorous way to improve performance without sacrificing accuracy, and (ii) safe planning, where we guarantee safety when using a DNN to predict whether a given action is safe based on visual observations. In our experiments, we demonstrate that our approach can be used to provide guarantees for state-of-the-art DNNs.
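To make the core construction concrete, the following is a minimal sketch of a Clopper-Pearson interval for a binomial proportion, computed by bisection on the exact binomial CDF. It is illustrative only: the function names and the bisection approach are our own choices, not the paper's implementation, and a real calibration pipeline would apply such intervals per histogram bin over held-out validation data.

```python
from math import comb


def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))


def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact two-sided (1 - alpha) confidence interval for the success
    probability, given k successes in n trials (Clopper-Pearson)."""
    # Lower endpoint: the p solving P(X >= k | p) = alpha / 2.
    # P(X >= k | p) is increasing in p, so bisection applies.
    if k == 0:
        lo = 0.0
    else:
        a, b = 0.0, 1.0
        for _ in range(60):
            mid = (a + b) / 2
            if 1 - binom_cdf(k - 1, n, mid) < alpha / 2:
                a = mid  # p too small: tail probability still below alpha/2
            else:
                b = mid
        lo = a
    # Upper endpoint: the p solving P(X <= k | p) = alpha / 2.
    # P(X <= k | p) is decreasing in p, so bisection applies.
    if k == n:
        hi = 1.0
    else:
        a, b = 0.0, 1.0
        for _ in range(60):
            mid = (a + b) / 2
            if binom_cdf(k, n, mid) > alpha / 2:
                a = mid  # p too small: CDF still above alpha/2
            else:
                b = mid
        hi = b
    return lo, hi
```

In the histogram-binning setting, one would group validation examples by the DNN's raw confidence, count the correct predictions in each bin, and report the interval's lower endpoint as a conservative calibrated confidence for that bin.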