Efficient and theoretically sound uncertainty quantification is crucial for building trust in deep learning models for critical real-world applications, yet it remains a challenging task. Useful uncertainty information should have two key properties: it should be valid (guaranteeing coverage) and discriminative (more uncertain when the expected risk is high). Moreover, when combined with deep learning (DL) methods, it should be scalable and affect the DL model's performance minimally. Most existing Bayesian methods lack frequentist coverage guarantees and usually degrade model performance. The few available frequentist methods are rarely discriminative and/or violate coverage guarantees due to unrealistic assumptions. Moreover, many methods are expensive or require substantial modifications to the base neural network. Building upon recent advances in conformal prediction [13, 32] and leveraging the classical idea of kernel regression, we propose Locally Valid and Discriminative prediction intervals (LVD), a simple, efficient and lightweight method to construct discriminative prediction intervals (PIs) for almost any DL model. With no assumptions on the data distribution, such PIs also offer finite-sample local coverage guarantees (in contrast to the simpler marginal coverage). We empirically verify, using diverse datasets, that besides being the only locally valid method for DL, LVD also exceeds or matches the performance (including coverage rate and prediction accuracy) of existing uncertainty quantification methods, while offering additional benefits in scalability and flexibility.
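To make the conformal-prediction building block concrete, the sketch below shows the standard *split* (marginal) conformal construction of prediction intervals that LVD builds upon. This is only the simpler marginal baseline mentioned in the abstract, not LVD itself; the function name and interface are illustrative, not from the paper.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    """Marginal split-conformal prediction intervals.

    A simpler baseline than LVD's locally valid construction: with a
    held-out calibration set, the intervals cover the true label with
    probability >= 1 - alpha on average (marginal coverage), with no
    assumptions on the data distribution beyond exchangeability.
    """
    # Nonconformity scores: absolute residuals on the calibration set.
    resid = np.abs(y_cal - model(X_cal))
    n = len(resid)
    # Finite-sample corrected (1 - alpha) quantile of the scores.
    level = min(1.0, np.ceil((1 - alpha) * (n + 1)) / n)
    q = np.quantile(resid, level)
    pred = model(X_test)
    return pred - q, pred + q
```

LVD goes further by weighting the calibration residuals with a kernel (in a learned embedding space), so that the interval width adapts locally instead of being a single global quantile `q` for all test points.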


