Contrastive learning has demonstrated great capability to learn representations without annotations, even outperforming supervised baselines. However, it still lacks important properties useful for real-world application, one of which is uncertainty. In this paper, we propose a simple way to generate uncertainty scores for many contrastive methods by re-purposing temperature, a mysterious hyperparameter used for scaling. By observing that temperature controls how sensitive the objective is to specific embedding locations, we aim to learn temperature as an input-dependent variable, treating it as a measure of embedding confidence. We call this approach "Temperature as Uncertainty", or TaU. Through experiments, we demonstrate that TaU is useful for out-of-distribution detection, while remaining competitive with benchmarks on linear evaluation. Moreover, we show that TaU can be learned on top of pretrained models, enabling uncertainty scores to be generated post-hoc with popular off-the-shelf models. In summary, TaU is a simple yet versatile method for generating uncertainties for contrastive learning. Open source code can be found at: https://github.com/mhw32/temperature-as-uncertainty-public.
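To make the idea concrete, below is a minimal sketch (not the authors' released implementation) of how an input-dependent temperature head could plug into an InfoNCE-style contrastive loss. The class and function names (`TaUHead`, `tau_info_nce`), the exact placement of the temperature in the logits, and the use of a softplus to keep it positive are illustrative assumptions.

```python
# Minimal sketch of "Temperature as Uncertainty": a head that outputs both an
# embedding and a per-input temperature, and an InfoNCE-style loss that uses it.
# All names and design choices here are assumptions, not the paper's exact code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaUHead(nn.Module):
    """Maps encoder features to (unit-norm embedding, per-sample temperature)."""
    def __init__(self, feat_dim: int, proj_dim: int = 128):
        super().__init__()
        self.proj = nn.Linear(feat_dim, proj_dim)  # embedding location
        self.temp = nn.Linear(feat_dim, 1)         # raw (pre-activation) temperature

    def forward(self, h: torch.Tensor):
        z = F.normalize(self.proj(h), dim=-1)      # unit-norm embedding
        tau = F.softplus(self.temp(h)) + 1e-4      # positive, input-dependent temperature
        return z, tau.squeeze(-1)

def tau_info_nce(z1: torch.Tensor, z2: torch.Tensor, tau1: torch.Tensor) -> torch.Tensor:
    """InfoNCE where each anchor's logits are scaled by its own learned temperature.
    A larger temperature flattens that anchor's logits, which can be read as lower
    confidence in the embedding location (one simple placement; the paper's objective
    may scale terms differently)."""
    batch = z1.shape[0]
    sim = z1 @ z2.t()                              # (batch, batch) cosine similarities
    logits = sim / tau1.unsqueeze(1)               # per-anchor temperature scaling
    labels = torch.arange(batch, device=z1.device) # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Usage sketch: h1, h2 are encoder features of two augmented views of the same batch.
head = TaUHead(feat_dim=2048)
h1, h2 = torch.randn(32, 2048), torch.randn(32, 2048)
z1, tau1 = head(h1)
z2, _ = head(h2)
loss = tau_info_nce(z1, z2, tau1)
# At test time, tau1 can serve as the uncertainty score, e.g. for OOD detection.
```

In this sketch the temperature head shares the encoder features with the projection head, so it can also be trained post-hoc on top of a frozen pretrained encoder, mirroring the post-hoc setting described in the abstract.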