Active learning (AL) has recently gained popularity for deep learning (DL) models, owing to its efficient and informative sampling, especially when the learner requires large-scale labelled datasets. Commonly, sampling and training happen in stages, with new batches added at each cycle. One main bottleneck of this strategy is the narrow representation learned by the model, which limits the overall AL selection. We present MoBYv2AL, a novel self-supervised active learning framework for image classification. Our contribution lies in lifting MoBY, one of the most successful self-supervised learning algorithms, into the AL pipeline. Specifically, we add a downstream task-aware objective function and optimize it jointly with the contrastive loss. Further, we derive a data-distribution selection function for labelling new examples. Finally, we test and study the robustness and performance of our pipeline on image classification tasks. We achieve state-of-the-art results compared to recent AL methods. Code is available at: https://github.com/razvancaramalau/MoBYv2AL
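The joint optimization described above — a contrastive loss combined with a downstream task-aware objective — can be sketched in miniature. This is a hypothetical NumPy illustration under assumed forms (an InfoNCE-style contrastive term over two augmented views, cross-entropy on the labelled subset, and a weighting factor `lam`); it is not the paper's actual implementation, which lives in the linked repository.

```python
import numpy as np

def contrastive_loss(z1, z2, tau=0.2):
    """InfoNCE-style loss between two augmented views (assumed form).

    Each row of z1 is a positive pair with the same row of z2;
    all other rows in the batch act as negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / tau                       # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))               # positives on the diagonal

def task_loss(probs, labels):
    """Downstream task-aware objective: cross-entropy on labelled examples."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def joint_loss(z1, z2, probs, labels, lam=1.0):
    """Joint objective: contrastive term plus weighted supervised term."""
    return contrastive_loss(z1, z2) + lam * task_loss(probs, labels)

# Toy batch: 8 samples, 16-dim embeddings, 4 classes.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = z1 + 0.05 * rng.normal(size=(8, 16))            # slightly perturbed view
probs = np.full((8, 4), 0.25)                        # uniform class predictions
labels = rng.integers(0, 4, size=8)
loss = joint_loss(z1, z2, probs, labels)
```

Here `lam` balances the self-supervised and supervised signals; the two terms are computed on the same encoder outputs, so gradients from the labelled task shape the representation used for AL selection.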