In this paper, we investigate the dynamics-aware adversarial attack problem in deep neural networks. Most existing adversarial attack algorithms are designed under a basic assumption: the network architecture is fixed throughout the attack process. However, this assumption does not hold for many recently proposed networks, e.g., 3D sparse convolution networks, which contain input-dependent execution to improve computational efficiency. This leads to a serious lagged-gradient issue: the attack learned at the current step becomes ineffective once the architecture changes afterward. To address this issue, we propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient. More specifically, we reformulate the gradients to be aware of the potential dynamic changes of network architectures, so that the learned attack "leads" the next step better than dynamics-unaware methods when the network architecture changes dynamically. Extensive experiments on various datasets show that LGM achieves impressive performance on both semantic segmentation and classification. Compared with dynamics-unaware methods, LGM achieves about 20% lower mIoU on average on the ScanNet and S3DIS datasets. LGM also outperforms recent point cloud attacks.
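To make the lagged-gradient issue concrete, the following is a minimal PyTorch sketch, not the paper's method. It uses a hypothetical toy model (DynamicNet) whose active branch depends on the input, loosely mimicking input-dependent execution in sparse convolution networks. The dynamics-unaware step (lagged_step) differentiates through the branch selected by the current input, so its gradient may be stale once the perturbed input triggers the other branch; the look-ahead step (leading_step) is only an illustration of the flavor of a dynamics-aware reformulation, not the paper's exact LGM formulation.

```python
import torch
import torch.nn.functional as F

class DynamicNet(torch.nn.Module):
    """Hypothetical toy model with input-dependent execution: the active
    branch is chosen by the input norm, mimicking how 3D sparse convolution
    networks change their computation with the input."""
    def __init__(self):
        super().__init__()
        self.branch_a = torch.nn.Linear(8, 2)
        self.branch_b = torch.nn.Linear(8, 2)

    def forward(self, x):
        # The "architecture" (active branch) depends on the current input.
        return self.branch_a(x) if x.norm() > 2.0 else self.branch_b(x)

def lagged_step(model, x, y, alpha=0.05):
    # Dynamics-unaware step: the gradient flows through the branch selected
    # by the *current* x. If x + step activates the other branch, this
    # gradient is "lagged" and may no longer be effective.
    x_adv = x.detach().requires_grad_(True)
    F.cross_entropy(model(x_adv).unsqueeze(0), y).backward()
    return x + alpha * x_adv.grad.sign()

def leading_step(model, x, y, alpha=0.05):
    # Illustrative dynamics-aware step: probe where a tentative step would
    # land, then take the gradient through the architecture that the landing
    # point activates, so the update anticipates the dynamic change.
    x_probe = lagged_step(model, x, y, alpha).detach()
    x_adv = x_probe.requires_grad_(True)
    F.cross_entropy(model(x_adv).unsqueeze(0), y).backward()
    return x + alpha * x_adv.grad.sign()

model = DynamicNet()
x, y = torch.randn(8), torch.tensor([0])
print(lagged_step(model, x, y))
print(leading_step(model, x, y))
```

Under these assumptions, the two steps differ exactly when the tentative perturbation crosses the routing boundary, which is the situation where a dynamics-unaware attack degrades.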