In this paper, we investigate the dynamics-aware adversarial attack problem for adaptive neural networks. Most existing adversarial attack algorithms are designed under a basic assumption: the network architecture is fixed throughout the attack process. However, this assumption does not hold for many recently proposed adaptive neural networks, which adaptively deactivate unnecessary execution units based on the input to improve computational efficiency. This leads to a serious lagged-gradient issue: the attack learned at the current step becomes ineffective because the architecture changes afterward. To address this issue, we propose a Leaded Gradient Method (LGM) and demonstrate the significant effects of the lagged gradient. More specifically, we reformulate the gradients to be aware of potential dynamic changes of the network architecture, so that the learned attack better "leads" the next step than dynamics-unaware methods when the architecture changes dynamically. Extensive experiments on representative types of adaptive neural networks, for both 2D images and 3D point clouds, show that our LGM achieves impressive adversarial attack performance compared with dynamics-unaware attack methods.
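To make the lagged-gradient issue concrete, here is a minimal toy sketch (not the paper's LGM, and with a hypothetical gating rule): an input-dependent gate selects one of two linear units, and a naive FGSM-style step uses the gradient of the currently active unit. The perturbation then flips the gate, so the gradient was computed under an architecture that no longer executes.

```python
# Toy adaptive "network": an input-dependent gate selects one of two
# linear units, mimicking input-dependent architecture changes.
W_A = (-1.0, -2.0)   # unit active when the gate fires
W_B = (-3.0, 0.5)    # unit active otherwise

def gate(x):
    # Hypothetical gating rule: the execution path depends on the input
    return x[0] + x[1] > 0.0

def forward(x):
    W = W_A if gate(x) else W_B
    return W[0] * x[0] + W[1] * x[1]

def loss(x):
    # The attacker tries to push this loss up
    return forward(x) ** 2

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

x = (0.4, -0.3)                      # gate(x) is True -> unit A runs
out = forward(x)
# Gradient of the loss under the CURRENT architecture (unit A only)
grad = (2 * out * W_A[0], 2 * out * W_A[1])

eps = 0.5
# Naive FGSM-style ascent step using the (soon-to-be-lagged) gradient
x_adv = (x[0] + eps * sign(grad[0]), x[1] + eps * sign(grad[1]))

# The step flips the gate: the gradient was taken under unit A, but the
# perturbed input runs through unit B -- the lagged-gradient failure.
print(gate(x), gate(x_adv))
print(loss(x), loss(x_adv))   # the "ascent" step can even lower the loss
```

In this toy instance the architecture change makes the dynamics-unaware step counterproductive; a dynamics-aware formulation would account for the gate's dependence on the perturbed input when forming the gradient.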