Scene flow allows autonomous vehicles to reason about the arbitrary motion of multiple independent objects, which is key to long-term mobile autonomy. While estimating scene flow from LiDAR has progressed recently, it remains largely unknown how to estimate scene flow from a 4D radar, an increasingly popular automotive sensor valued for its robustness against adverse weather and lighting conditions. Compared with LiDAR point clouds, radar data are drastically sparser, noisier, and of much lower resolution. Annotated datasets for radar scene flow are also absent and costly to acquire in the real world. These factors jointly make radar scene flow estimation a challenging problem. This work aims to address the above challenges and estimate scene flow from 4D radar point clouds by leveraging self-supervised learning. A robust scene flow estimation architecture and three novel losses are bespoke designed to cope with intractable radar data. Real-world experimental results validate that our method is able to robustly estimate the radar scene flow in the wild and effectively supports the downstream task of motion segmentation.
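To make the self-supervised setup concrete, the sketch below shows how such objectives are commonly built for point-based scene flow: a nearest-neighbour (Chamfer-style) term pulls the source cloud warped by the predicted flow onto the target cloud, and a smoothness term regularises flow within local neighbourhoods. This is a minimal illustration under assumed conventions (PyTorch tensors, `(N, 3)` clouds), not the paper's actual three losses, whose exact formulation is radar-specific.

```python
# Illustrative self-supervised scene flow losses (a sketch, NOT the paper's
# bespoke radar losses). p1: (N, 3) source cloud, p2: (M, 3) target cloud,
# flow: (N, 3) predicted per-point motion from frame 1 to frame 2.
import torch


def chamfer_loss(p1_warped: torch.Tensor, p2: torch.Tensor) -> torch.Tensor:
    """Symmetric nearest-neighbour distance between the warped source and target."""
    d = torch.cdist(p1_warped, p2)  # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()


def smoothness_loss(p1: torch.Tensor, flow: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Penalise flow differences among each point's k nearest spatial neighbours."""
    d = torch.cdist(p1, p1)
    knn = d.topk(k + 1, largest=False).indices[:, 1:]  # (N, k), drop self-match
    neighbour_flow = flow[knn]                          # (N, k, 3)
    return (neighbour_flow - flow.unsqueeze(1)).norm(dim=-1).mean()


def self_supervised_loss(p1, p2, flow, w_smooth: float = 1.0) -> torch.Tensor:
    """Total loss: warped source should match the target, and flow should be
    locally smooth. No ground-truth flow annotations are required."""
    return chamfer_loss(p1 + flow, p2) + w_smooth * smoothness_loss(p1, flow)
```

For sparse, noisy radar returns a plain hard nearest-neighbour match like this is brittle, which is precisely why robustified loss designs are needed; the sketch only conveys the self-supervision principle.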