In next-generation communications and networks, machine learning (ML) models are expected to deliver not only accurate predictions but also well-calibrated confidence scores that reflect the true likelihood of correct decisions. This paper studies the calibration performance of an ML-based outage predictor within a single-user, multi-resource allocation framework. We first establish key theoretical properties of the system's outage probability (OP) under perfect calibration. Importantly, we show that as the number of resources grows, the OP of a perfectly calibrated predictor approaches the expected model output conditioned on that output lying below the classification threshold. In contrast, when only one resource is available, the system's OP equals the model's overall expected output. We then derive the OP conditions for a perfectly calibrated predictor. These findings guide the choice of the classification threshold required to achieve a desired OP, helping system designers meet specific reliability requirements. We also demonstrate that post-processing calibration cannot improve the system's minimum achievable OP, since it introduces no new information about future channel states. Additionally, we show that well-calibrated models belong to a broader class of predictors that necessarily improve the OP; in particular, we establish a monotonicity condition that the accuracy-confidence function must satisfy for such an improvement to occur. To demonstrate these theoretical properties, we conduct a rigorous simulation-based analysis using two post-processing calibration techniques: Platt scaling and isotonic regression. Within this framework, the predictor is trained with an outage loss function specifically designed for this system, and the analysis is performed on Rayleigh fading channels whose temporal correlation is captured by Clarke's 2D model, which accounts for receiver mobility.
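To make the post-processing step concrete, the following is a minimal Python sketch of Platt scaling and isotonic regression applied to raw predictor scores, compared via expected calibration error (ECE). The synthetic scores, the assumed miscalibration map (squaring the score), and all numeric values are illustrative placeholders, not the paper's actual predictor, loss function, or data.

```python
# Illustrative sketch (assumed synthetic data, not the paper's setup):
# post-processing calibration of a binary outage predictor's scores.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)

# Synthetic uncalibrated scores s in (0, 1); the true outage probability is an
# assumed distortion of the score (over-confident model), used only for illustration.
s = np.clip(rng.uniform(0.0, 1.0, size=20_000), 1e-3, 1 - 1e-3)
true_prob = s ** 2
y = rng.binomial(1, true_prob)

# Platt scaling: fit a two-parameter sigmoid on the score logits.
logit = np.log(s / (1.0 - s)).reshape(-1, 1)
platt = LogisticRegression().fit(logit, y)
p_platt = platt.predict_proba(logit)[:, 1]

# Isotonic regression: fit a monotone non-decreasing map from score to probability.
iso = IsotonicRegression(out_of_bounds="clip").fit(s, y)
p_iso = iso.predict(s)

def ece(p, y, n_bins=15):
    """Expected calibration error over equal-width confidence bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(p, bins[1:-1])
    err = 0.0
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            err += mask.mean() * abs(p[mask].mean() - y[mask].mean())
    return err

print(f"ECE raw:      {ece(s, y):.4f}")
print(f"ECE Platt:    {ece(p_platt, y):.4f}")
print(f"ECE isotonic: {ece(p_iso, y):.4f}")
```

Both calibrators apply a monotone transformation to the existing scores, which is consistent with the abstract's point that post-processing adds no new information about future channel states and therefore cannot lower the system's minimum achievable OP.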
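The temporally correlated Rayleigh fading referenced above can be reproduced with a standard sum-of-sinusoids approximation of Clarke's 2D isotropic scattering model. The sketch below is an assumption-laden stand-in (the carrier frequency, receiver speed, average SNR, and simple SNR-threshold outage rule are all hypothetical), not the paper's simulation configuration.

```python
# Illustrative sketch (assumed parameters): temporally correlated flat Rayleigh
# fading via a sum-of-sinusoids approximation of Clarke's 2D model, whose
# autocorrelation is J0(2*pi*fd*tau) for maximum Doppler shift fd.
import numpy as np

def clarke_rayleigh(num_samples, fs, fd, num_sinusoids=64, rng=None):
    """Complex fading gains h[n] sampled at rate fs (Hz) with max Doppler fd (Hz)."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(num_samples) / fs
    # Random angles of arrival and phases, uniform on [0, 2*pi), one per sinusoid.
    alpha = rng.uniform(0.0, 2.0 * np.pi, num_sinusoids)
    phi = rng.uniform(0.0, 2.0 * np.pi, num_sinusoids)
    # Superpose plane waves; normalization keeps E[|h|^2] = 1.
    h = np.exp(1j * (2.0 * np.pi * fd * np.outer(t, np.cos(alpha)) + phi)).sum(axis=1)
    return h / np.sqrt(num_sinusoids)

# Hypothetical example: 3 GHz carrier, 30 m/s receiver -> fd = v * fc / c = 300 Hz.
h = clarke_rayleigh(num_samples=10_000, fs=10_000.0, fd=300.0,
                    rng=np.random.default_rng(1))
avg_snr_db = 20.0                                        # assumed average SNR
inst_snr_db = avg_snr_db + 20.0 * np.log10(np.abs(h))    # instantaneous SNR
outage = inst_snr_db < 3.0                               # assumed 3 dB outage threshold
print(f"Empirical outage rate: {outage.mean():.3f}")
```

For a large number of sinusoids, the amplitude of h is approximately Rayleigh and its empirical autocorrelation approaches J0(2*pi*fd*tau), the defining property of Clarke's model under receiver mobility.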