Deep learning models are the state-of-the-art methods for semantic point cloud segmentation, but their success relies on the availability of large-scale annotated datasets, which can be extremely time-consuming and prohibitively expensive to compile. In this work, we propose an active learning approach to maximize model performance under a limited annotation budget. We investigate the appropriate sample granularity for active selection under a realistic annotation-cost measure (clicks), and demonstrate that super-point-based selection uses the limited budget more efficiently than point-level and instance-level selection. We further exploit local consistency constraints to boost the performance of the super-point-based approach. We evaluate our methods on two benchmark datasets (ShapeNet and S3DIS), and the results demonstrate that active learning is an effective strategy for addressing the high annotation costs of semantic point cloud segmentation.
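The core selection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each superpoint costs one click to annotate and uses mean per-point predictive entropy as the uncertainty score (the actual acquisition function and cost model may differ).

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """Shannon entropy per point from softmax class probabilities (N, C)."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_superpoints(probs, superpoint_ids, click_budget):
    """Greedily pick the most uncertain superpoints until the click
    budget is spent (assumption: one click annotates one superpoint)."""
    point_unc = entropy(probs)
    scores = {}
    for sp in np.unique(superpoint_ids):
        mask = superpoint_ids == sp
        scores[sp] = point_unc[mask].mean()  # average entropy per superpoint
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [int(sp) for sp in ranked[:click_budget]]

# Toy example: 6 points, 2 classes, grouped into 3 superpoints.
probs = np.array([[0.50, 0.50], [0.60, 0.40],   # superpoint 0: uncertain
                  [0.90, 0.10], [0.95, 0.05],   # superpoint 1: confident
                  [0.55, 0.45], [0.70, 0.30]])  # superpoint 2: mid
sp_ids = np.array([0, 0, 1, 1, 2, 2])
picked = select_superpoints(probs, sp_ids, click_budget=2)
print(picked)  # → [0, 2]: the two most uncertain superpoints
```

Scoring whole superpoints rather than individual points is what makes click-based budgets stretch further: one click labels a geometrically coherent region instead of a single point.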