Deep neural networks are prone to adversarial examples that maliciously alter the network's outcome. Due to the increasing popularity of 3D sensors in safety-critical systems and the vast deployment of deep learning models for 3D point sets, there is a growing interest in adversarial attacks and defenses for such models. So far, the research has focused on the semantic level, namely, deep point cloud classifiers. However, point clouds are also widely used in a geometry-related form that includes encoding and reconstructing the geometry. In this work, we are the first to consider the problem of adversarial examples at a geometric level. In this setting, the question is how to craft a small change to a clean source point cloud that leads, after passing through an autoencoder model, to the reconstruction of a different target shape. Our attack is in sharp contrast to existing semantic attacks on 3D point clouds. While such works aim to change a classifier's predicted label, we alter the entire reconstructed geometry. Additionally, we demonstrate the robustness of our attack under defense: remnant characteristics of the target shape are still present in the output after the defense is applied to the adversarial input. Our code is publicly available at https://github.com/itailang/geometric_adv.
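To make the attack setting concrete, the sketch below shows one plausible formulation of a geometric attack: optimize a small perturbation of the source point cloud so that the autoencoder's reconstruction matches the target shape, while a regularizer keeps the perturbation small. This is an illustrative assumption, not the authors' implementation (see the repository above for that); the toy autoencoder, the Chamfer distance loss, and all hyperparameters here are stand-ins.

```python
# Hypothetical sketch of a geometric (reconstruction-level) attack on a point
# cloud autoencoder. The autoencoder below is a toy stand-in for a pretrained
# model; names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


def chamfer_distance(a, b):
    # a: (N, 3), b: (M, 3) point sets; symmetric Chamfer distance.
    d = torch.cdist(a, b)  # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()


class ToyAutoencoder(nn.Module):
    # Placeholder point cloud autoencoder: per-point MLP encoder with
    # global max pooling, followed by an MLP decoder.
    def __init__(self, n_points=1024, latent=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                     nn.Linear(64, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                     nn.Linear(256, n_points * 3))
        self.n_points = n_points

    def forward(self, pc):                       # pc: (N, 3)
        z = self.encoder(pc).max(dim=0).values   # global latent code
        return self.decoder(z).view(self.n_points, 3)


def geometric_attack(ae, source, target, steps=500, lr=1e-2, reg=1.0):
    # Optimize a perturbation delta so that ae(source + delta) reconstructs
    # the *target* shape while delta stays small (L2 regularization).
    delta = torch.zeros_like(source, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = ae(source + delta)
        loss = (chamfer_distance(recon, target)
                + reg * (delta ** 2).sum(dim=1).mean())
        loss.backward()
        opt.step()
    return (source + delta).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    ae = ToyAutoencoder().eval()     # stands in for a pretrained autoencoder
    source = torch.rand(1024, 3)     # clean source point cloud
    target = torch.rand(1024, 3)     # different target shape
    adversarial = geometric_attack(ae, source, target)
    print("mean perturbation:",
          (adversarial - source).norm(dim=1).mean().item())
```

The key design point this sketch highlights is the loss: unlike a semantic attack, which maximizes a classifier's misclassification loss, the objective here is a reconstruction distance to an entire target geometry, traded off against the perturbation magnitude.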