Active learning (AL) algorithms aim to identify an optimal subset of data for annotation, such that deep neural networks (DNN) can achieve better performance when trained on this labeled subset. AL is especially impactful in industrial-scale settings where data labeling costs are high and practitioners use every tool at their disposal to improve model performance. The recent success of self-supervised pretraining (SSP) highlights the importance of harnessing abundant unlabeled data to boost model performance. By combining AL with SSP, we can make use of unlabeled data while simultaneously labeling and training on particularly informative samples. In this work, we study a combination of AL and SSP on ImageNet. We find that performance on small toy datasets -- the typical benchmark setting in the literature -- is not representative of performance on ImageNet due to the class-imbalanced samples selected by an active learner. Among the existing baselines we test, popular AL algorithms fail to outperform random sampling across a variety of small- and large-scale settings. To remedy the class-imbalance problem, we propose Balanced Selection (BASE), a simple, scalable AL algorithm that consistently outperforms random sampling by selecting more balanced samples for annotation than existing methods. Our code is available at: https://github.com/zeyademam/active_learning .
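The abstract describes BASE only at a high level. One simple way to make an active learner's selection class-balanced is to split the annotation budget evenly across the model's predicted classes and take the most uncertain samples within each class; the sketch below illustrates that idea. All function and variable names here are assumptions for illustration, not the paper's actual BASE implementation.

```python
import numpy as np

def balanced_selection(pred_classes, uncertainty, budget, num_classes):
    """Illustrative class-balanced selection sketch (not the paper's
    actual BASE algorithm): divide the labeling budget evenly across
    predicted classes, picking the most uncertain samples per class."""
    per_class = budget // num_classes
    selected = []
    for c in range(num_classes):
        # indices of unlabeled samples the model assigns to class c
        idx = np.where(pred_classes == c)[0]
        # sort so the most uncertain samples in this class come first
        idx = idx[np.argsort(-uncertainty[idx])]
        selected.extend(idx[:per_class].tolist())
    return selected
```

Because the budget is allocated per predicted class, no single (pseudo-)class can dominate the annotation batch, which is the failure mode the abstract attributes to existing AL methods at ImageNet scale.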