With the advent of neural networks for point clouds, deep learning has begun to shine in the field of 3D object recognition, and researchers have shown increasing interest in probing the reliability of point cloud networks by fooling them with perturbed instances. However, most existing studies aim at imperceptibility or surface consistency, so that humans perceive no perturbation on the adversarial examples. This work proposes two new attack methods, OPA and CTA, which go in the opposite direction: we restrict the perturbation dimensions to a human-cognizable range with the help of explainability methods, so that the working principle or decision boundary of a model becomes comprehensible through the observable perturbation magnitude. Our results show that popular point cloud networks can be deceived with an almost 100% success rate by shifting only one point of the input instance. In addition, we attempt to provide a more persuasive viewpoint for comparing the robustness of point cloud models against adversarial attacks. We also show the interesting impact of different point attribution distributions on the adversarial robustness of point cloud networks. Finally, we discuss how our approaches facilitate explainability studies for point cloud networks. To the best of our knowledge, this is the first point-cloud-based adversarial approach concerning explainability. Our code is available at https://github.com/Explain3D/Exp-One-Point-Atk-PC.
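To make the core idea concrete, the following is a minimal PyTorch sketch of an explainability-guided one-point attack. It is an illustration under assumptions, not the paper's exact implementation: the classifier `model` (taking a (1, N, 3) tensor), the plain gradient-saliency attribution, and the hyperparameters `steps` and `lr` are hypothetical placeholders; the actual OPA/CTA algorithms may rank critical points and optimize the perturbation differently.

```python
import torch
import torch.nn.functional as F

def one_point_attack(model, points, label, steps=200, lr=0.01):
    """Illustrative sketch of an explainability-guided one-point attack.

    points: (1, N, 3) float tensor, label: int ground-truth class.
    Attribution here is plain gradient saliency; the paper's
    explainability method may differ.
    """
    model.eval()

    # 1) Attribute: gradient of the true-class logit w.r.t. each point.
    pts = points.detach().clone().requires_grad_(True)
    logits = model(pts)
    logits[0, label].backward()
    saliency = pts.grad.norm(dim=-1)       # (1, N) per-point importance
    idx = saliency.argmax(dim=-1).item()   # most critical point

    # 2) Shift only that single point until the prediction flips.
    #    No norm bound on delta: the shift is meant to be observable.
    delta = torch.zeros(3, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([label])
    adv = points.detach().clone()
    for _ in range(steps):
        adv = points.detach().clone()
        adv[0, idx] = adv[0, idx] + delta  # perturb one point only
        out = model(adv)
        if out.argmax(dim=-1).item() != label:
            break                          # misclassified: attack done
        loss = -F.cross_entropy(out, target)  # push away from true label
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adv.detach(), idx
```

Note the design choice in step 2: since the attack deliberately trades imperceptibility for explainability, the single-point shift is left unconstrained in magnitude rather than bounded by an L2 or Chamfer budget.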