Although 3D point cloud classification has recently been widely deployed in a variety of application scenarios, it remains highly vulnerable to adversarial attacks, which makes robust training of 3D models increasingly important. Our analysis of existing adversarial attacks shows that adversarial perturbations are concentrated in the mid- and high-frequency components of the input data. Therefore, by suppressing high-frequency content during the training phase, the models' robustness against adversarial examples is improved. Experiments show that the proposed defense lowers the success rate of six attacks on the PointNet, PointNet++, and DGCNN models. In particular, compared with state-of-the-art methods, classification accuracy improves by an average of 3.8% under the drop100 attack and 4.26% under the drop200 attack. The method also improves the models' accuracy on the original, unperturbed dataset compared with other available methods.
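To make the idea of "suppressing high-frequency content" concrete, the sketch below low-pass filters a point cloud through a k-NN graph Fourier transform. This is only an illustrative assumption of one way such filtering could be done, not the paper's exact pipeline; the function name, the neighbourhood size `k`, and the `keep_ratio` cutoff are all hypothetical choices.

```python
# Minimal sketch (assumed implementation, not the paper's method):
# low-pass filtering of a point cloud via a k-NN graph Fourier transform.
import numpy as np

def low_pass_point_cloud(points, k=10, keep_ratio=0.3):
    """Suppress high-frequency content of an (N, 3) point cloud.

    Builds a symmetric k-NN adjacency, forms the graph Laplacian,
    projects the xyz coordinates onto its eigenbasis (graph Fourier
    transform), and keeps only the lowest-frequency components.
    """
    n = points.shape[0]
    # Pairwise squared distances and k nearest neighbours (skip self at column 0).
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]
    adj = np.zeros((n, n))
    adj[np.arange(n)[:, None], idx] = 1.0
    adj = np.maximum(adj, adj.T)                 # symmetrize the adjacency
    lap = np.diag(adj.sum(axis=1)) - adj         # combinatorial graph Laplacian
    # Graph Fourier basis: eigenvectors ordered by increasing frequency.
    eigvals, eigvecs = np.linalg.eigh(lap)
    spectrum = eigvecs.T @ points                # GFT of the xyz coordinates
    cutoff = max(1, int(keep_ratio * n))
    spectrum[cutoff:] = 0.0                      # zero out high-frequency bands
    return eigvecs @ spectrum                    # inverse GFT

# Usage: filter each training sample before feeding it to the classifier.
cloud = np.random.rand(1024, 3)
smoothed = low_pass_point_cloud(cloud, k=10, keep_ratio=0.3)
```

In this reading, the filtered clouds replace (or augment) the clean training samples, so the classifier never learns to rely on the high-frequency detail that the analyzed attacks tend to perturb.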