Deep neural networks have achieved high accuracy on object detection, but their success hinges on large amounts of labeled data. To reduce this dependency on labels, various active learning strategies have been proposed, typically based on the confidence of the detector. However, these methods are biased towards high-performing classes and can lead to acquired datasets that are not representative of the test data. In this work, we propose a unified framework for active learning that considers both the uncertainty and the robustness of the detector, ensuring that the network performs well across all classes. Furthermore, our method leverages auto-labeling to suppress potential distribution drift while boosting the performance of the model. Experiments on PASCAL VOC07+12 and MS-COCO show that our method consistently outperforms a wide range of active learning methods, yielding up to a 7.7% improvement in mAP, or up to an 82% reduction in labeling cost. Code will be released upon acceptance of the paper.