In this paper, we improve single-vehicle LiDAR-based 3D object detection models by extending their capacity to process point cloud sequences instead of individual point clouds. In this step, we extend our previous work on rectifying the shadow effect caused by concatenating point clouds, in order to boost the detection accuracy of multi-frame detection models. Our extension incorporates HD maps and distills an Oracle model. Next, we further improve single-vehicle perception through multi-agent collaboration via Vehicle-to-Everything (V2X) communication. We devise a simple yet effective collaboration method that achieves a better bandwidth-performance tradeoff than prior art while minimizing both the changes made to single-vehicle detection models and the assumptions on inter-agent synchronization. Experiments on the V2X-Sim dataset show that our collaboration method reaches 98% of the performance of early collaboration while consuming the same bandwidth as late collaboration, which is only 0.03% of that of early collaboration. The code will be released at https://github.com/quan-dao/practical-collab-perception.
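For readers unfamiliar with the multi-frame input described above, the sketch below illustrates one common way to build it: past LiDAR sweeps are transformed into the current ego frame and stacked before being fed to a detector. This is a minimal illustration only; the function name, the ego-pose interface, and the per-point time channel are assumptions for exposition, not the implementation released with the paper.

```python
import numpy as np

def concatenate_point_clouds(clouds, poses_to_current):
    """Merge a sequence of LiDAR sweeps into the current ego frame.

    clouds: list of (N_i, 3) arrays of xyz points, one per sweep (sensor frame).
    poses_to_current: list of (4, 4) homogeneous transforms mapping each
        sweep's sensor frame to the current ego frame.
    Returns an (M, 4) array of [x, y, z, dt] points, where dt is the sweep
    index offset used as a simple temporal feature channel.
    """
    merged = []
    for dt, (pts, T) in enumerate(zip(clouds, poses_to_current)):
        homo = np.hstack([pts, np.ones((pts.shape[0], 1))])   # (N_i, 4) homogeneous coords
        aligned = (homo @ T.T)[:, :3]                          # ego-motion compensation
        stamp = np.full((aligned.shape[0], 1), float(dt))      # per-point time channel
        merged.append(np.hstack([aligned, stamp]))
    return np.vstack(merged)
```

Naive concatenation of this kind is what produces the shadow effect on moving objects that the paper's rectification step addresses.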