LiDAR point clouds collected from a moving vehicle are a function of its trajectory, because the sensor motion must be compensated to avoid distortions. When autonomous vehicles feed LiDAR point clouds to deep networks for perception and planning, could this motion compensation consequently become a wide-open backdoor in those networks, given both the adversarial vulnerability of deep learning and GPS-based vehicle trajectory estimation that is susceptible to wireless spoofing? We demonstrate such possibilities for the first time: instead of directly attacking the point cloud coordinates, which would require tampering with the raw LiDAR readings, merely spoofing a self-driving car's trajectory with small adversarial perturbations is enough to make safety-critical objects undetectable or detected at incorrect positions. Moreover, we develop a polynomial trajectory perturbation to achieve a temporally smooth and highly imperceptible attack. Extensive experiments on 3D object detection show that such attacks not only effectively lower the performance of state-of-the-art detectors, but also transfer to other detectors, raising a red flag for the community. The code is available at https://ai4ce.github.io/FLAT/.
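To make the attack surface concrete, here is a minimal sketch in plain NumPy of the two ingredients the abstract names: motion compensation, which ties every LiDAR point to the vehicle pose at its capture time, and a low-degree polynomial in time as a temporally smooth trajectory perturbation. The function names (`compensate`, `pose_at`, `polynomial_perturbation`, `true_pose`) are hypothetical, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch (not the authors' code): motion compensation maps each
# raw LiDAR point, captured at time t during a sweep, into a common frame
# using the vehicle pose at t. A spoofed pose stream therefore moves every
# compensated point without touching the raw LiDAR returns.

def compensate(points, times, pose_at):
    """Transform raw sweep points into the sweep-end frame.

    points  : (N, 3) raw points in the sensor frame at capture time
    times   : (N,) capture timestamps, normalized to [0, 1] over the sweep
    pose_at : callable t -> (R, trans), the 3x3 rotation and 3-vector
              translation of the sensor at time t w.r.t. the sweep-end frame
    """
    out = np.empty_like(points, dtype=float)
    for i, (p, t) in enumerate(zip(points, times)):
        R, trans = pose_at(t)
        out[i] = R @ p + trans  # undo the sensor motion at capture time t
    return out

def polynomial_perturbation(coeffs):
    """Temporally smooth translation offset: a low-degree polynomial of time.

    coeffs : (K, 3) array of polynomial coefficients, the adversarial
             variables. Terms start at degree 1 so the offset vanishes at
             t = 0, keeping consecutive poses consistent and the spoofing
             hard to notice.
    """
    def delta(t):
        powers = np.array([t ** k for k in range(1, len(coeffs) + 1)])
        return powers @ coeffs  # (3,) offset added to the true translation
    return delta

# Illustrative use: the attacker only perturbs the pose stream.
coeffs = np.array([[0.05, 0.00, 0.0],   # degree-1 term (meters)
                   [0.02, 0.01, 0.0]])  # degree-2 term
delta = polynomial_perturbation(coeffs)
spoofed_pose = lambda t: (true_pose(t)[0], true_pose(t)[1] + delta(t))
```

In this view, the attacker's free variables are just the few polynomial coefficients: gradients of a detector's loss can flow through the compensation into `coeffs`, so small, smooth pose offsets relocate every compensated point while the raw sensor readings stay untouched.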