With recent developments in convolutional neural networks, deep learning for 3D point clouds has made significant progress on various 3D scene understanding tasks, e.g., object recognition and object detection. In safety-critical environments, however, it is not well understood how vulnerable such deep learning models are to adversarial examples. In this work, we explore adversarial attacks on point cloud-based neural networks. We propose a general formulation for adversarial point cloud generation via $\ell_0$-norm optimisation. Our method generates adversarial examples by attacking the classification ability of point cloud-based networks while keeping the examples imperceptible and the number of manipulated points minimal. The proposed formulation is general and can be realised with different attack strategies. Experimental results show that our method achieves state-of-the-art performance, with attack success rates above 89\% and 90\% on synthetic and real-world data respectively, while manipulating only about 4\% of the total points.
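To make the $\ell_0$ constraint concrete, a common way to enforce it is to project a candidate perturbation so that only the $k$ points with the largest per-point changes are kept and all other points are left untouched. The sketch below is purely illustrative and is not the paper's actual attack; the function `project_l0` and the toy numbers are assumptions for demonstration.

```python
import numpy as np

def project_l0(delta, k):
    """Project a per-point perturbation onto the l0 ball of size k:
    keep the k points with the largest perturbation magnitude, zero the rest."""
    norms = np.linalg.norm(delta, axis=1)   # (N,) magnitude of each point's change
    keep = np.argsort(norms)[-k:]           # indices of the k most-perturbed points
    projected = np.zeros_like(delta)
    projected[keep] = delta[keep]
    return projected

# Toy example: a cloud of 100 points, perturbing at most 4 (~4% of the points),
# echoing the sparsity level reported in the abstract.
rng = np.random.default_rng(0)
points = rng.standard_normal((100, 3)).astype(np.float32)
delta = 0.01 * rng.standard_normal((100, 3)).astype(np.float32)
delta = project_l0(delta, k=4)
adversarial = points + delta

# Count how many points were actually modified.
print(int((np.linalg.norm(delta, axis=1) > 0).sum()))
```

In an iterative attack, a projection of this kind would typically be applied after each gradient step, so that the classification loss is attacked while the number of manipulated points never exceeds the $\ell_0$ budget.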