Deep 3D point cloud models are vulnerable to adversarial attacks, which poses a threat to safety-critical applications such as autonomous driving. Robust training and defense-by-denoising, exemplified by adversarial training and statistical filtering respectively, are typical strategies for defending against adversarial perturbations. However, the former incurs massive computational overhead while the latter relies heavily on specified noise priors, limiting generalized robustness against diverse attacks. This paper introduces a new defense mechanism based on denoising diffusion models that adaptively removes diverse noises with a tailored intensity estimator. Specifically, we first estimate the adversarial distortion of a point cloud by computing the distance from each point to the best-fit plane of its local neighborhood. Based on the estimated distortion, we select a diffusion time step for the input point cloud and run the forward diffusion process to disrupt potential adversarial shifts. We then run the reverse denoising process to restore the disrupted point cloud to a clean distribution. This approach enables effective defense against adaptive attacks with varying noise budgets and markedly improves the robustness of existing 3D deep recognition models.
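The following is a minimal sketch of the distortion estimation step described above, assuming a point cloud given as an (N, 3) NumPy array. The function name `estimate_distortion` and the neighborhood size `k` are illustrative assumptions, not the paper's implementation; the plane fit uses the standard PCA result that the eigenvector with the smallest eigenvalue of the neighborhood covariance is the least-squares plane normal.

```python
import numpy as np

def estimate_distortion(points: np.ndarray, k: int = 16) -> float:
    """Mean distance of each point to the best-fit plane of its k neighbors."""
    n = points.shape[0]
    # Pairwise squared distances; fine for small clouds, use a KD-tree at scale.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    # Indices of the k nearest neighbors of each point (excluding the point itself).
    nbr_idx = np.argsort(d2, axis=1)[:, 1:k + 1]
    dists = np.empty(n)
    for i in range(n):
        nbrs = points[nbr_idx[i]]                    # (k, 3) neighborhood
        centroid = nbrs.mean(axis=0)
        # PCA: the smallest-eigenvalue eigenvector of the neighborhood
        # covariance is the normal of the best-fit (least-squares) plane.
        cov = np.cov((nbrs - centroid).T)
        eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues ascending
        normal = eigvecs[:, 0]
        # Point-to-plane distance: projection of the offset onto the normal.
        dists[i] = abs(np.dot(points[i] - centroid, normal))
    return float(dists.mean())
```

A clean, locally planar surface yields distances near zero, while adversarially perturbed points deviate from their neighborhood planes, so the mean distance serves as an intensity signal for the defense.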
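Below is a sketch of the adaptive forward/reverse diffusion step under common assumptions: a DDPM-style linear beta schedule and a pretrained noise predictor `eps_model(x_t, t)` returning predicted noise of shape (N, 3). The names `purify` and `distortion_to_timestep`, and the monotone clipped mapping from distortion to depth `t*`, are hypothetical stand-ins for the paper's tailored intensity estimator.

```python
import torch

T = 200
betas = torch.linspace(1e-4, 0.02, T)        # linear DDPM beta schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def distortion_to_timestep(d: float, d_max: float = 0.05) -> int:
    """Map estimated distortion to a diffusion depth t* (assumed monotone, clipped)."""
    return int(min(max(d / d_max, 0.0), 1.0) * (T - 1))

@torch.no_grad()
def purify(x0: torch.Tensor, eps_model, distortion: float) -> torch.Tensor:
    t_star = distortion_to_timestep(distortion)
    # Forward diffusion q(x_t | x_0): disrupt potential adversarial shifts.
    noise = torch.randn_like(x0)
    x = alpha_bars[t_star].sqrt() * x0 + (1 - alpha_bars[t_star]).sqrt() * noise
    # Reverse (ancestral) denoising from t* back toward the clean distribution.
    for t in range(t_star, -1, -1):
        eps = eps_model(x, torch.tensor([t]))
        mean = (x - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            x = mean + betas[t].sqrt() * torch.randn_like(x)
        else:
            x = mean
    return x
```

The key design point conveyed by the abstract is that a lightly distorted input only needs shallow diffusion (small `t*`, little geometric damage), whereas a heavily attacked input is diffused deeper so the forward noise overwhelms the adversarial shift before the reverse process restores it.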