Adversarial attacks have succeeded in "fooling" DNNs, and among them, gradient-based algorithms have become one of the mainstream approaches. Based on the linearity hypothesis [12], under the $\ell_\infty$ constraint, applying the $sign$ operation to the gradients is a natural choice for generating perturbations. However, this operation has a side effect: it introduces a directional bias between the true gradients and the generated perturbations. In other words, current methods leave a gap between the true gradients and the actual noises, which leads to biased and inefficient attacks. In this paper, we therefore analyze this bias theoretically via a Taylor expansion and further propose a correction of $sign$, namely the Fast Gradient Non-sign Method (FGNM). Notably, FGNM is a general routine that can seamlessly replace the conventional $sign$ operation in gradient-based attacks at negligible extra computational cost. Extensive experiments demonstrate the effectiveness of our methods. Specifically, ours outperform conventional sign-based attacks by \textbf{27.5\%} at most and \textbf{9.5\%} on average. Our anonymous code is publicly available: \url{https://git.io/mm-fgnm}.
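The contrast between the conventional $sign$ step and a non-sign step can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the correction takes the common norm-matching form, i.e., the raw gradient is rescaled so its $\ell_2$ norm equals that of the sign vector, keeping the true gradient direction. The function names and the small stabilizing constant are our own.

```python
import numpy as np

def sign_step(grad, eps):
    # Conventional FGSM-style step: the perturbation direction is
    # sign(grad), which generally deviates from the true gradient.
    return eps * np.sign(grad)

def non_sign_step(grad, eps):
    # Hypothetical sketch of a non-sign step: rescale the raw gradient
    # so its L2 norm matches that of the sign vector. The perturbation
    # then keeps the true gradient direction (no sign-induced bias).
    zeta = np.linalg.norm(np.sign(grad)) / (np.linalg.norm(grad) + 1e-12)
    return eps * zeta * grad
```

Under this sketch, both steps have (nearly) the same $\ell_2$ magnitude, but only the non-sign step is exactly parallel to the gradient, which is the directional bias the abstract refers to.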