Deep Learning predictions with measurable confidence are increasingly desirable for real-world problems, especially in high-risk settings. The Conformal Prediction (CP) framework is a versatile solution that automatically guarantees a maximum error rate. However, CP suffers from computational inefficiencies that limit its application to large-scale datasets. In this paper, we propose a novel conformal loss function that approximates the traditionally two-step CP approach in a single step. By evaluating and penalising deviations from the stringent expected CP output distribution, a Deep Learning model may learn the direct relationship between input data and conformal p-values. Our approach achieves significant training-time reductions of up to 86% compared to Aggregated Conformal Prediction (ACP), an accepted approximate variant of CP. Through a comprehensive empirical evaluation on the well-established MNIST dataset, we show that our novel loss function is competitive with ACP in terms of approximate validity and predictive efficiency.
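The core idea above relies on a known property of valid conformal predictors: the p-values assigned to true labels are (approximately) uniformly distributed on [0, 1]. A minimal sketch of a penalty in that spirit, assuming a hypothetical helper `uniform_deviation_loss` and a Cramér–von-Mises-style discrepancy (the paper's actual loss may differ):

```python
import numpy as np

def uniform_deviation_loss(p_values):
    """Hypothetical sketch: penalise the squared deviation of the sorted
    p-values from the expected uniform order statistics i/(n+1).
    Under a valid conformal predictor this discrepancy should be small."""
    p = np.sort(np.asarray(p_values, dtype=float))
    n = p.size
    expected = np.arange(1, n + 1) / (n + 1)  # expected uniform order statistics
    return float(np.mean((p - expected) ** 2))

# A uniform grid of p-values incurs (near-)zero loss,
# while degenerate, clumped p-values are penalised.
uniform_p = np.arange(1, 100) / 100.0
degenerate_p = np.full(99, 0.5)
assert uniform_deviation_loss(uniform_p) < uniform_deviation_loss(degenerate_p)
```

In a training loop, a differentiable analogue of this discrepancy would be added to the task loss so that the network's outputs are driven toward the expected CP p-value distribution in a single step, rather than via a separate calibration pass.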