Despite substantial progress in 3D object detection, advanced 3D detectors often suffer from heavy computation overheads. To this end, we explore the potential of knowledge distillation (KD) for developing efficient 3D object detectors, focusing on popular pillar- and voxel-based detectors. In the absence of well-developed teacher-student pairs, we first study how to obtain student models with good trade-offs between accuracy and efficiency from the perspectives of model compression and input resolution reduction. Then, we build a benchmark to assess existing KD methods developed in the 2D domain for 3D object detection upon six well-constructed teacher-student pairs. Further, we propose an improved KD pipeline incorporating an enhanced logit KD method that performs KD only on a few pivotal positions determined by the teacher's classification response, and a teacher-guided student model initialization that facilitates transferring the teacher model's feature extraction ability to the student through weight inheritance. Finally, we conduct extensive experiments on the Waymo dataset. Our best-performing model achieves $65.75\%$ LEVEL 2 mAPH, surpassing its teacher model while requiring only $44\%$ of the teacher's FLOPs. Our most efficient model runs at 51 FPS on an NVIDIA A100, which is $2.2\times$ faster than PointPillar with even higher accuracy. Code is available at \url{https://github.com/CVMI-Lab/SparseKD}.
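The abstract only names the enhanced logit KD component; the snippet below is a minimal PyTorch sketch of the general idea it describes, distilling classification logits only at a few positions where the teacher's classification response is highest. The tensor layout, the top-k selection rule, the function name \texttt{pivotal\_logit\_kd\_loss}, and its parameters are illustrative assumptions, not the authors' implementation (see the repository linked above for that).

\begin{verbatim}
# Minimal sketch (assumed, not the authors' code) of logit KD restricted to
# "pivotal" positions selected by the teacher's classification response.
import torch
import torch.nn.functional as F

def pivotal_logit_kd_loss(student_logits, teacher_logits,
                          num_pivotal=128, tau=1.0):
    # student_logits, teacher_logits: (B, N, C) dense classification logits
    # from the detection head, with spatial positions flattened into N.
    # Only the num_pivotal positions with the highest teacher confidence
    # contribute to the distillation loss.

    # Teacher confidence per position: maximum class probability.
    teacher_conf = teacher_logits.sigmoid().max(dim=-1).values      # (B, N)
    _, idx = teacher_conf.topk(num_pivotal, dim=1)                  # (B, K)

    # Gather logits at the pivotal positions and soften with temperature tau.
    idx_exp = idx.unsqueeze(-1).expand(-1, -1, student_logits.size(-1))
    s = torch.gather(student_logits, 1, idx_exp) / tau
    t = torch.gather(teacher_logits, 1, idx_exp) / tau

    # Soft-target KL divergence; 'batchmean' divides the sum by the batch size.
    loss = F.kl_div(F.log_softmax(s, dim=-1), F.softmax(t, dim=-1),
                    reduction="batchmean") * (tau ** 2)
    return loss
\end{verbatim}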