To alleviate the resource constraints of real-time point cloud applications that run on edge devices, in this paper we present BiPointNet, the first model binarization approach for efficient deep learning on point clouds. We discover that the immense performance drop of binarized models for point clouds mainly stems from two challenges: aggregation-induced feature homogenization, which degrades information entropy, and scale distortion, which hinders optimization and invalidates scale-sensitive structures. With theoretical justifications and in-depth analysis, our BiPointNet introduces Entropy-Maximizing Aggregation (EMA) to modulate the distribution before aggregation toward maximum information entropy, and Layer-wise Scale Recovery (LSR) to efficiently restore feature representation capacity. Extensive experiments show that BiPointNet outperforms existing binarization methods by convincing margins, even reaching a level comparable with its full-precision counterpart. We highlight that our techniques are generic, guaranteeing significant improvements on various fundamental tasks and mainstream backbones. Moreover, BiPointNet achieves an impressive 14.7x speedup and 18.9x storage saving on real-world resource-constrained devices.
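To make the binarization setting concrete, the following is a minimal NumPy sketch of a binarized linear layer with a layer-wise scale factor in the spirit of LSR: both inputs and weights are sign-binarized to {-1, +1}, and a single per-layer scalar rescales the output to recover the magnitude lost by binarization. The function names and the fixed `alpha` are illustrative assumptions, not the paper's implementation (in practice the scale would be learned during training).

```python
import numpy as np

def binarize(x):
    # Sign binarization: map full-precision values to {-1, +1}.
    return np.where(x >= 0, 1.0, -1.0)

def bi_linear_lsr(x, w, alpha):
    # Binarized linear layer with a layer-wise scale factor:
    # the binary matrix product is rescaled by a single scalar
    # to restore the output's dynamic range.
    return alpha * (binarize(x) @ binarize(w))

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))    # toy point features: 4 points, 8 channels
w = rng.standard_normal((8, 16))   # toy layer weights
alpha = 0.1                        # illustrative fixed scale; learned in practice
y = bi_linear_lsr(x, w, alpha)     # output shape: (4, 16)
```

On hardware, the binary product inside `bi_linear_lsr` is what enables the reported speedups, since it can be computed with XNOR and popcount operations instead of floating-point multiplies.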