Data uncertainty is common in face recognition (FR) images. However, deep learning algorithms often make predictions with high confidence even for uncertain or irrelevant inputs. Intuitively, FR algorithms can benefit both from estimating uncertainty and from detecting out-of-distribution (OOD) samples. Taking a probabilistic view of the current classification model, the temperature scalar is exactly the scale of the uncertainty noise implicitly added in the softmax function. Meanwhile, the uncertainty of images in a dataset should follow a prior distribution. Based on these observations, a unified framework for uncertainty modeling and FR, Random Temperature Scaling (RTS), is proposed to learn a reliable FR algorithm. The benefits of RTS are two-fold. (1) In the training phase, it can adjust the learning strength of clean and noisy samples for stability and accuracy. (2) In the test phase, it provides a confidence score to detect uncertain, low-quality, and even OOD samples, without training on extra labels. Extensive experiments on FR benchmarks demonstrate that the magnitude of variance in RTS, which serves as an OOD detection metric, is closely related to the uncertainty of the input image. RTS achieves top performance on both the FR and OOD detection tasks. Moreover, a model trained with RTS performs robustly on noisy datasets. The proposed module is lightweight and adds only negligible computation cost to the model.
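The core idea above — treating the softmax temperature as the scale of implicitly added uncertainty noise, with the temperature drawn from a prior distribution — can be sketched minimally as follows. This is an illustrative sketch only: the abstract does not specify the prior or its parameterization, so the Gamma prior and its `shape`/`scale` values here are assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def rts_softmax(logits, shape=8.0, scale=0.25):
    """Illustrative Random Temperature Scaling (RTS) step.

    A per-sample temperature is drawn from a Gamma prior (an assumed
    choice for this sketch) and used to rescale the logits before the
    softmax, so larger temperatures correspond to larger uncertainty
    noise and flatter predictive distributions.
    """
    t = rng.gamma(shape, scale)        # sampled temperature, always > 0
    z = logits / t                     # temperature-scaled logits
    z = z - z.max()                    # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()    # standard softmax
    return p, t

logits = np.array([2.0, 1.0, 0.1])
probs, t = rts_softmax(logits)
```

In this reading, a near-zero temperature recovers an (almost) deterministic argmax, while a large sampled temperature flattens the distribution; the spread of the sampled temperatures plays the role of the variance magnitude that the abstract uses as an OOD score.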