We introduce supervised contrastive active learning (SCAL) and propose efficient query strategies for active learning based on feature similarity (featuresim) and principal-component-analysis-based feature reconstruction error (fre) to select informative samples with diverse feature representations. We demonstrate that our proposed method achieves state-of-the-art accuracy and model calibration, and reduces sampling bias in an active learning setup, on both balanced and imbalanced image classification datasets. We also evaluate the robustness to distributional shift of models derived from the different query strategies. Through extensive experiments, we show that our approach outperforms high-performing, compute-intensive methods by a large margin, yielding 9.9% lower mean corruption error, 7.2% lower expected calibration error under dataset shift, and 8.9% higher AUROC for out-of-distribution detection.
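The two query strategies named above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the choice of cosine similarity for featuresim, and the use of an SVD to obtain the principal subspace for fre are assumptions made for illustration; features are assumed to come from some encoder (e.g. a penultimate layer).

```python
import numpy as np

def fre_scores(labeled_feats, unlabeled_feats, n_components=16):
    """PCA-based feature reconstruction error (fre) query scores.

    Fit a principal subspace on the labeled pool's features; unlabeled
    samples whose features reconstruct poorly from that subspace are
    treated as novel/diverse and receive high scores.
    """
    mu = labeled_feats.mean(axis=0)
    # principal directions via SVD of the centered labeled features
    _, _, Vt = np.linalg.svd(labeled_feats - mu, full_matrices=False)
    V = Vt[:n_components].T                # (d, k) subspace basis
    Z = (unlabeled_feats - mu) @ V         # project onto subspace
    recon = Z @ V.T + mu                   # back-project
    return np.linalg.norm(unlabeled_feats - recon, axis=1)

def featuresim_scores(labeled_feats, unlabeled_feats):
    """Feature-similarity (featuresim) query scores (illustrative).

    Scores samples by dissimilarity (negative max cosine similarity)
    to the labeled pool, so dissimilar samples rank first.
    """
    Ln = labeled_feats / np.linalg.norm(labeled_feats, axis=1, keepdims=True)
    Un = unlabeled_feats / np.linalg.norm(unlabeled_feats, axis=1, keepdims=True)
    return -(Un @ Ln.T).max(axis=1)

def select_queries(scores, budget):
    # pick the `budget` samples with the largest scores
    return np.argsort(scores)[::-1][:budget]
```

For example, if the labeled features all lie in a low-dimensional subspace, an unlabeled sample with energy outside that subspace gets a high fre score and is selected first.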