Recent aerial object detection models rely on large amounts of labeled training data, which incurs prohibitive manual labeling costs in large aerial scenes with dense objects. Active learning effectively reduces the data labeling cost by selectively querying informative and representative unlabeled samples. However, existing active learning methods are mainly designed for class-balanced settings and image-level querying in generic object detection tasks, which makes them less applicable to aerial object detection due to the long-tailed class distribution and dense small objects in aerial scenes. In this paper, we propose MUS-CDB, a novel active learning method for cost-effective aerial object detection. Specifically, both object-level and image-level informativeness are considered during object selection to avoid redundant and myopic querying. In addition, an easy-to-use class-balancing criterion is incorporated to favor minority-class objects, alleviating the long-tailed class distribution problem in model training. We further devise a training loss to mine the latent knowledge in the unlabeled image regions. Extensive experiments are conducted on the DOTA-v1.0 and DOTA-v2.0 benchmarks to validate the effectiveness of the proposed method. For the ReDet, KLD, and SASM detectors on the DOTA-v2.0 dataset, the results show that the proposed MUS-CDB method can save nearly 75\% of the labeling cost while achieving comparable mAP to other active learning methods. Code is publicly available at https://github.com/ZJW700/MUS-CDB.
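To make the object-level, class-balanced querying idea concrete, below is a minimal sketch of one way such an acquisition step could look: object proposals are scored by predictive uncertainty and re-weighted by the inverse frequency of their predicted class so that minority classes are favored. The function names, the entropy-based uncertainty measure, and the exact re-weighting rule are illustrative assumptions, not the paper's actual MUS-CDB criterion.

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """Predictive entropy of a (num_objects, num_classes) probability array."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

def class_balanced_query(probs, labeled_class_counts, k):
    """Illustrative sketch (not the paper's exact rule): pick k object indices
    with high uncertainty, boosted for classes with few labeled instances.

    probs: (N, C) softmax scores for N candidate objects over C classes.
    labeled_class_counts: length-C counts of already-labeled objects per class.
    """
    uncertainty = entropy(probs)                    # object-level informativeness
    pseudo_labels = probs.argmax(axis=1)            # predicted class per object
    counts = np.asarray(labeled_class_counts, dtype=float)
    balance = 1.0 / (1.0 + counts)                  # favor under-represented classes
    scores = uncertainty * balance[pseudo_labels]   # combined acquisition score
    return np.argsort(-scores)[:k]                  # indices of objects to query

# Toy usage: 5 candidate objects, 3 classes, class 2 barely labeled so far.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(class_balanced_query(probs, labeled_class_counts=[120, 80, 3], k=2))
```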