Active Learning (AL) is a set of techniques for reducing labeling cost by sequentially selecting data samples from a large unlabeled pool for labeling. Meanwhile, Deep Learning (DL) is data-hungry, and the performance of DL models scales monotonically with more training data. In recent years, Deep Active Learning (DAL) has therefore emerged as a feasible solution for maximizing model performance while minimizing the expensive labeling cost. Abundant methods have sprung up, and literature reviews of DAL have been presented before. However, performance comparisons of the different branches of DAL methods across various tasks remain insufficient, and our work fills this gap. In this paper, we survey and categorize DAL-related work and construct comparative experiments across frequently used datasets and DAL algorithms. Additionally, we explore factors that influence the efficacy of DAL (e.g., batch size, number of epochs in the training process), providing better references for researchers designing their own DAL experiments or carrying out DAL-related applications. Finally, we construct a DAL toolkit, DeepAL+, by re-implementing many highly cited DAL methods, and it will be released to the public.
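The selection step described above, choosing which unlabeled samples to send for labeling, can be illustrated with one common query strategy, uncertainty sampling. This is a minimal sketch of that single strategy, not the specific methods surveyed in the paper; the function names and the toy probability pool are illustrative assumptions.

```python
import math

def entropy(probs):
    # Shannon entropy of a predicted class distribution;
    # higher entropy means the model is less certain.
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_batch(pool_probs, k):
    """Uncertainty sampling: return the indices of the k pool samples
    whose predicted distributions have the highest entropy."""
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: entropy(pool_probs[i]),
                    reverse=True)
    return ranked[:k]

# Toy unlabeled pool: predicted class probabilities for three samples.
pool = [
    [0.98, 0.02],   # confident prediction
    [0.55, 0.45],   # most uncertain
    [0.70, 0.30],   # mildly uncertain
]
print(select_batch(pool, 2))  # -> [1, 2]
```

In a full DAL loop, the selected batch would be labeled by an oracle, added to the training set, and the model retrained before the next query round.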