Deep neural networks are known to be vulnerable to adversarial examples, which can deliberately fool a model into making mistakes. Recently, a few works have extended this task from 2D images to 3D point clouds via global point cloud optimization. However, perturbing all points globally is not effective for misleading the victim model. First, not all points matter for the optimization toward misclassification: many points consume a considerable share of the distortion budget while contributing little to the attack. Second, multi-label optimization is suboptimal for adversarial attacks, since it wastes extra energy searching for a multi-label collapse of the victim model and causes the transformed instance to be dissimilar to any particular instance. Third, independent adversarial and perceptibility losses, which handle misclassification and dissimilarity separately, update every point equally without focus; as a result, once the perceptibility loss approaches its budget threshold, all points become stuck on the surface of the hypersphere and the attack is locked into a local optimum. We therefore propose local aggressive adversarial attacks (L3A) to address these issues. Technically, we select a set of salient points, the high-score subset of the point cloud ranked by gradient, to perturb. A series of aggressive optimization strategies is then developed to reinforce the generation of imperceptible adversarial examples that mislead the victim models. Extensive experiments on PointNet, PointNet++ and DGCNN demonstrate the state-of-the-art performance of our method compared with existing adversarial attack methods.
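To illustrate the idea of perturbing only a gradient-ranked subset of points rather than the whole cloud, here is a minimal, hypothetical sketch in numpy. The function names, the fixed step size, and the use of a precomputed per-point gradient are our own assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def select_salient_points(grads, k):
    """Return indices of the k points with the largest gradient magnitude.

    grads: (N, 3) array of per-point loss gradients (assumed precomputed
    by backpropagating the adversarial loss through the victim model).
    """
    scores = np.linalg.norm(grads, axis=1)  # saliency score per point
    return np.argsort(scores)[-k:]          # top-k highest-score points

# Toy example with random data standing in for a real point cloud / gradients.
rng = np.random.default_rng(0)
points = rng.normal(size=(1024, 3))
grads = rng.normal(size=(1024, 3))

idx = select_salient_points(grads, k=16)
mask = np.zeros(len(points), dtype=bool)
mask[idx] = True

# Perturb only the salient subset (normalized-gradient step, a common choice);
# the remaining points are left untouched, saving distortion budget.
step = 0.01
adv = points.copy()
adv[mask] -= step * grads[mask] / (
    np.linalg.norm(grads[mask], axis=1, keepdims=True) + 1e-12
)
```

In a real attack loop, `grads` would be recomputed each iteration from the victim model, and the perceptibility constraint would bound the accumulated perturbation on the selected subset only.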