Efficient and theoretically sound uncertainty quantification is crucial for building trust in deep learning models for critical real-world applications, yet it remains a challenging task. Useful uncertainty information is expected to have two key properties: it should be valid (guaranteeing coverage) and discriminative (more uncertain when the expected risk is high). Moreover, when combined with deep learning (DL) methods, it should be scalable and have minimal impact on the DL model's performance. Most existing Bayesian methods lack frequentist coverage guarantees and usually degrade model performance. The few available frequentist methods are rarely discriminative and/or violate coverage guarantees due to unrealistic assumptions. Moreover, many methods are expensive or require substantial modifications to the base neural network. Building upon recent advances in conformal prediction [13, 31] and leveraging the classical idea of kernel regression, we propose Locally Valid and Discriminative predictive intervals (LVD), a simple, efficient, and lightweight method to construct discriminative predictive intervals (PIs) for almost any DL model. With no assumptions on the data distribution, such PIs also offer finite-sample local coverage guarantees (in contrast to the simpler marginal coverage). We empirically verify, using diverse datasets, that besides being the only locally valid method, LVD also exceeds or matches the performance (including coverage rate and prediction accuracy) of existing uncertainty quantification methods, while offering additional benefits in scalability and flexibility.
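To make the construction concrete, below is a minimal, hypothetical sketch of a kernel-localized split-conformal interval in the spirit of LVD. Everything in it is an illustrative assumption rather than the paper's implementation: the Gaussian kernel on raw inputs, the fixed bandwidth, and the helper names (gaussian_kernel, weighted_quantile, lvd_style_interval) are invented for exposition, and this toy version carries none of LVD's finite-sample local guarantees.

```python
import numpy as np

def gaussian_kernel(x, X, bandwidth=1.0):
    """Kernel weights between one query point x (shape (d,)) and rows of X (shape (n, d))."""
    d2 = np.sum((X - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def weighted_quantile(values, weights, q):
    """q-quantile of `values` under normalized `weights`."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w) / np.sum(w)
    return v[min(np.searchsorted(cum, q), len(v) - 1)]

def lvd_style_interval(x, predict, X_cal, y_cal, alpha=0.1, bandwidth=1.0):
    """Kernel-localized conformal PI around the model's point prediction.

    `predict` is any fitted regressor mapping an (n, d) array to (n,) predictions;
    (X_cal, y_cal) is a held-out calibration set not used for training.
    """
    residuals = np.abs(y_cal - predict(X_cal))      # nonconformity scores on calibration data
    w = gaussian_kernel(x, X_cal, bandwidth)        # emphasize calibration points near x
    r = weighted_quantile(residuals, w, 1 - alpha)  # local (1 - alpha) residual quantile
    y_hat = predict(x[None, :])[0]
    return y_hat - r, y_hat + r
```

Because the residual quantile is re-weighted around each query point, such intervals widen where nearby calibration errors are large and narrow where they are small, which is what "discriminative" means here; LVD couples this localization with the DL model itself to make it both valid and scalable.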