Since neural networks were first applied to point clouds, deep learning has excelled at 3D object recognition, and researchers have shown increasing interest in probing the reliability of point cloud networks through adversarial attacks. However, most existing studies aim to deceive humans or defense algorithms, while the few that address the operating principles of the models themselves remain flawed in their selection of critical points. In this work, we propose two adversarial methods, One Point Attack (OPA) and Critical Traversal Attack (CTA), which incorporate explainability techniques and aim to explore the intrinsic operating principles of point cloud networks and their sensitivity to critical-point perturbations. Our results show that popular point cloud networks can be deceived with an almost $100\%$ success rate by shifting only one point of the input instance. In addition, we show the interesting impact of different point attribution distributions on the adversarial robustness of point cloud networks. Finally, we discuss how our approaches facilitate the explainability study of point cloud networks. To the best of our knowledge, this is the first point-cloud-based adversarial approach concerning explainability. Our code is available at https://github.com/Explain3D/Exp-One-Point-Atk-PC.
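The attribution-guided one-point attack described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the classifier here is a hypothetical random linear model with max pooling standing in for a PointNet-style network, the attribution score is a plain finite-difference gradient, and `one_point_attack` is an assumed name that shifts only the single highest-attribution point to descend the classification margin.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a point cloud classifier (OPA/CTA target
# PointNet-style networks): logits = W @ max-pool over all points.
W = rng.normal(size=(4, 3))          # 4 classes, 3-D points

def logits(pc):
    return W @ pc.max(axis=0)        # symmetric (max) pooling

def predict(pc):
    return int(np.argmax(logits(pc)))

def margin(pc, label):
    z = logits(pc)
    others = np.delete(z, label)
    return z[label] - others.max()   # > 0 while still classified as `label`

def num_grad(pc, label, eps=1e-4):
    """Finite-difference gradient of the margin w.r.t. every point."""
    g = np.zeros_like(pc)
    for i in range(pc.shape[0]):
        for j in range(pc.shape[1]):
            d = np.zeros_like(pc)
            d[i, j] = eps
            g[i, j] = (margin(pc + d, label) - margin(pc - d, label)) / (2 * eps)
    return g

def one_point_attack(pc, step=0.1, iters=200):
    """Shift only the highest-attribution (critical) point to flip the label."""
    pc, label = pc.copy(), predict(pc)
    g = num_grad(pc, label)
    idx = int(np.argmax(np.linalg.norm(g, axis=1)))  # critical point
    for _ in range(iters):
        pc[idx] -= step * num_grad(pc, label)[idx]   # descend the margin
        if predict(pc) != label:
            break                                    # label flipped
    return pc, idx

pc = rng.normal(size=(64, 3))
adv, moved = one_point_attack(pc)
print(predict(pc), predict(adv), moved)
```

The sketch only perturbs one row of the input cloud; with max pooling, the attack stalls once the shifted point stops being the pooled maximum, which is why the real methods rely on attribution maps of the full network rather than this toy gradient.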