With recent developments in convolutional neural networks, deep learning for 3D point clouds has made significant progress on various 3D scene understanding tasks, e.g., object recognition and semantic segmentation. In safety-critical environments, however, it is not well understood how vulnerable such deep learning models are to adversarial examples. In this work, we explore adversarial attacks on point cloud-based neural networks. We propose a unified formulation for adversarial point cloud generation that generalises two different attack strategies. Our method generates adversarial examples by attacking the classification ability of point cloud-based networks while accounting for the perceptibility of the examples and keeping point manipulation to a minimum. Experimental results show that our method achieves state-of-the-art performance, with attack success rates higher than 89% and 90% on synthetic and real-world data respectively, while manipulating only about 4% of the total points.
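The two constraints the abstract highlights, bounding the perceptibility of the perturbation and manipulating only a small fraction of points, can be illustrated with a toy sketch. This is not the paper's method: it uses a hypothetical linear point-cloud classifier (`logits = points.mean(0) @ W`) and a simple FGSM-style step, with an assumed perturbation bound `eps` and point budget `frac` chosen to echo the ~4% figure.

```python
import numpy as np

def attack_point_cloud(points, W, target, eps=0.05, frac=0.04, steps=20, lr=0.01):
    """Toy sparse, bounded attack on a hypothetical linear classifier.

    points: (N, 3) point cloud; W: (3, C) weights of a toy classifier
    whose logits are points.mean(0) @ W; target: class index to push
    the prediction towards. Only a `frac` fraction of points is moved,
    and each coordinate offset is clipped to `eps` (perceptibility bound).
    """
    n = points.shape[0]
    k = max(1, int(frac * n))                 # point budget, e.g. ~4% of points
    delta = np.zeros_like(points)
    # Gradient of logits[target] w.r.t. each point is W[:, target] / n
    # for this linear toy model (identical across points).
    g = np.tile(W[:, target] / n, (n, 1))
    # Select the k points with the largest gradient magnitude; with a
    # linear model they tie, so this just fixes which k points may move.
    idx = np.argsort(-np.linalg.norm(g, axis=1))[:k]
    for _ in range(steps):
        delta[idx] += lr * np.sign(g[idx])    # FGSM-style sign step
        delta = np.clip(delta, -eps, eps)     # keep the offset imperceptible
    return points + delta
```

Under these assumptions, the returned cloud differs from the input at only `k` points, each displaced by at most `eps` per coordinate, while the target-class logit increases.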