Annotating objects with 3D bounding boxes in LiDAR pointclouds is a costly, human-driven process in an autonomous driving perception system. In this paper, we present a method to semi-automatically annotate real-world pointclouds collected by deployment vehicles using simulated data. We train a 3D object detector jointly on labeled simulated data from CARLA and unlabeled real-world pointclouds from our target vehicle. The supervised object detection loss is augmented with a CORAL loss term that reduces the distance between the feature representations of the labeled simulated and unlabeled real pointclouds. The goal is to learn representations that are invariant across the simulated (labeled) source domain and the real-world (unlabeled) target domain. We also provide an updated survey of domain adaptation methods for pointclouds.
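The abstract does not spell out the exact form of the alignment term, but the standard CORAL loss (Sun & Saenko, 2016) penalizes the squared Frobenius distance between the covariance matrices of source and target features. The following is a minimal PyTorch sketch under that assumption; the feature tensors, the detection loss, and the weight lambda_coral are placeholders, not the paper's actual implementation.

import torch

def coral_loss(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    """CORAL loss: squared Frobenius distance between the feature covariance
    matrices of the source (simulated) and target (real) batches.
    source_feats, target_feats: (batch, d) feature matrices."""
    d = source_feats.size(1)

    def covariance(x: torch.Tensor) -> torch.Tensor:
        n = x.size(0)
        x_centered = x - x.mean(dim=0, keepdim=True)
        return x_centered.t() @ x_centered / (n - 1)

    c_s = covariance(source_feats)
    c_t = covariance(target_feats)
    # Normalization by 4*d^2 follows Sun & Saenko (2016).
    return torch.sum((c_s - c_t) ** 2) / (4.0 * d * d)

# Joint objective sketch: supervised detection loss on labeled simulated
# pointclouds plus a weighted CORAL term on the shared feature representations
# of simulated and real batches (names here are illustrative only):
# total_loss = detection_loss + lambda_coral * coral_loss(sim_feats, real_feats)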