Conformal prediction is a powerful distribution-free tool for uncertainty quantification, providing valid prediction intervals with finite-sample guarantees. To produce valid intervals that are also adaptive to the difficulty of each instance, a common approach is to compute normalized nonconformity scores on a separate calibration set. Self-supervised learning has been effectively utilized in many domains to learn general representations for downstream predictors. However, the use of self-supervision beyond model pretraining and representation learning has been largely unexplored. In this work, we investigate how self-supervised pretext tasks can improve the quality of conformal regressors, specifically by improving the adaptability of conformal intervals. We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores. We empirically demonstrate the benefit of this additional information on both synthetic and real data, measuring the efficiency (width), deficit, and excess of conformal prediction intervals.
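The normalized-score construction described above can be sketched in a few lines. The sketch below is illustrative only, not the paper's implementation: the base predictor, the self-supervised error signal, and the difficulty estimate are all simulated stand-ins, and the auxiliary model is replaced by a direct monotone mapping from the (simulated) self-supervised error to a difficulty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression with heteroscedastic noise (illustrative only).
n = 2000
x = rng.uniform(0, 5, n)
y = np.sin(x) + rng.normal(0, 0.1 + 0.2 * x, n)

# Hypothetical stand-ins for the abstract's components:
#   point_pred - output of the existing predictive model
#   ss_error   - per-instance error of an auxiliary self-supervised pretext
#                task, simulated here as a noisy proxy for local difficulty
point_pred = np.sin(x)
ss_error = (0.1 + 0.2 * x) * np.abs(rng.normal(1.0, 0.3, n))

# Difficulty estimate sigma_hat: here simply the self-supervised error;
# the paper instead uses it as an extra feature for a learned estimator.
sigma_hat = ss_error + 1e-6

# Split into calibration and test halves.
cal, test = np.arange(0, n // 2), np.arange(n // 2, n)

# Normalized nonconformity scores on the calibration set.
scores = np.abs(y[cal] - point_pred[cal]) / sigma_hat[cal]

# Finite-sample-valid empirical quantile at miscoverage level alpha.
alpha = 0.1
k = int(np.ceil((1 - alpha) * (len(cal) + 1)))
q = np.sort(scores)[k - 1]

# Adaptive intervals: wider where the auxiliary error signals difficulty.
lo = point_pred[test] - q * sigma_hat[test]
hi = point_pred[test] + q * sigma_hat[test]
coverage = np.mean((y[test] >= lo) & (y[test] <= hi))
print(f"empirical coverage: {coverage:.3f} (target {1 - alpha:.2f})")
```

Because the scores are normalized by `sigma_hat`, the resulting intervals stretch in high-noise regions and shrink in easy ones, which is exactly the adaptability the extra self-supervised feature is meant to improve.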