Our goal in this paper is to exploit heteroscedastic temperature scaling as a calibration strategy for out-of-distribution (OOD) detection. Heteroscedasticity here refers to the fact that the optimal temperature parameter can differ for each sample, in contrast to conventional approaches that use a single value for the entire distribution. To enable this, we propose a new training strategy called anchoring that estimates an appropriate temperature value for each sample, leading to state-of-the-art OOD detection performance across several benchmarks. Using NTK theory, we show that this temperature function estimate is closely linked to the epistemic uncertainty of the classifier, which explains its behavior. In contrast to some of the best-performing OOD detection approaches, our method does not require exposure to additional outlier datasets, custom calibration objectives, or model ensembling. Through empirical studies spanning different OOD detection settings (far OOD, near OOD, and semantically coherent OOD), we establish a highly effective OOD detection approach. Code to reproduce our results is available at github.com/LLNL/AMP.
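To illustrate the core idea of heteroscedastic (per-sample) temperature scaling, here is a minimal NumPy sketch. It is not the paper's anchoring-based estimator; the function name and the hard-coded temperatures are illustrative assumptions, showing only how a per-sample temperature, rather than a single global scalar, reshapes each sample's softmax confidence.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def heteroscedastic_temperature_scaling(logits, temperatures):
    """Scale each sample's logits by its own temperature before softmax.

    Conventional temperature scaling divides all logits by one scalar T;
    here `temperatures` has shape (N,), one value per sample (in the paper,
    such values would come from a learned temperature function).
    """
    t = np.asarray(temperatures, dtype=float).reshape(-1, 1)  # broadcast over classes
    return softmax(logits / t)

# Toy example: identical logits calibrated with different temperatures.
logits = np.array([[2.0, 1.0, 0.1],
                   [2.0, 1.0, 0.1]])
probs = heteroscedastic_temperature_scaling(logits, [1.0, 4.0])
# A higher temperature flattens the distribution, lowering the maximum
# softmax probability -- a common confidence score for flagging OOD samples.
print(probs.max(axis=1))
```

Because the second sample receives a larger temperature, its maximum softmax probability is strictly lower than the first sample's, even though the raw logits are identical.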