Deep neural networks (DNNs) are vulnerable to adversarial examples. Among adversarial attacks, black-box attacks are the most threatening. Current black-box attacks mainly adopt gradient-based iterative methods, which usually constrain the relationship between the iteration step size, the number of iterations, and the maximum perturbation. In this paper, we propose a new gradient iteration framework that redefines the relationship among these three quantities. Under this framework, we readily improve the attack success rate of DI-TI-MIM. In addition, we propose a gradient iterative attack method based on input dropout, which combines well with our framework, and we further extend it to a multiple-dropout-rate version. Experimental results show that our best method achieves an average attack success rate of 96.2\% against defense models, exceeding state-of-the-art gradient-based attacks.
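To make the coupling between step size, iteration count, and maximum perturbation concrete, the following is a minimal sketch of a momentum iterative attack (MIM-style) with input dropout added before each gradient computation. It uses a toy quadratic loss in NumPy; the names (`T`, `eps`, `mu`, `drop_rate`) and the placement of the dropout mask are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def loss_grad(x):
    # Gradient of a toy loss L(x) = 0.5 * ||x||^2 with respect to x.
    # In a real attack this would be the gradient of the classification
    # loss through a surrogate network.
    return x

def mim_input_dropout(x, eps=0.3, T=10, mu=1.0, drop_rate=0.1, seed=0):
    """Momentum iterative attack with input dropout (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    alpha = eps / T          # conventional coupling: step size = max perturbation / iterations
    g = np.zeros_like(x)     # accumulated momentum
    x_adv = x.copy()
    for _ in range(T):
        # Input dropout: randomly zero a fraction of input coordinates
        # before computing the gradient (assumed placement).
        mask = (rng.random(x_adv.shape) >= drop_rate).astype(x.dtype)
        grad = loss_grad(x_adv * mask)
        # MIM momentum update with L1-normalized gradient.
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = x_adv + alpha * np.sign(g)
        # Project back into the L_inf ball of radius eps around x.
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

The conventional constraint is the `alpha = eps / T` line: the step size is tied to the budget and iteration count so the iterates stay near the L-infinity ball; a framework that redefines this relationship would choose these three quantities more freely.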