Neural networks have proven to be both highly effective in computer vision and highly vulnerable to adversarial attacks. Consequently, as the use of neural networks grows due to their unrivaled performance, so too does the threat posed by adversarial attacks. In this work, we take a step towards addressing the challenge of adversarial robustness by exploring the relationship between the mini-batch size used during adversarial sample generation and the strength of the adversarial samples produced. We demonstrate that increasing the mini-batch size decreases the efficacy of the generated samples, and we draw connections between this observation and the phenomenon of vanishing gradients. We then formulate loss functions such that adversarial sample strength is not degraded by mini-batch size. Our findings highlight the risk of underestimating the true (practical) strength of adversarial attacks, and of overestimating a model's robustness. We share our code so that others can replicate our experiments and further explore the connection between batch size and adversarial sample strength.
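To make the batch-size effect concrete, below is a minimal PyTorch sketch (not the authors' implementation) illustrating one plausible route by which batched loss reduction can interact with gradient magnitudes: with the default mean reduction, the gradient of the loss with respect to each input is scaled by 1/batch_size, whereas a sum-reduced loss keeps per-sample gradients batch-size independent. The toy model, data shapes, and random inputs are placeholders, and the actual mechanism studied in the paper may differ.

```python
# Minimal sketch (assumed setup, not the paper's code): compare how the
# per-sample input-gradient magnitude behaves under mean- vs. sum-reduced
# cross-entropy as the mini-batch used for adversarial sample generation grows.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier

def input_grad_norm(batch_size: int, reduction: str) -> float:
    """Average L2 norm of the loss gradient w.r.t. each input in the batch."""
    x = torch.randn(batch_size, 1, 28, 28, requires_grad=True)
    y = torch.randint(0, 10, (batch_size,))
    loss = nn.functional.cross_entropy(model(x), y, reduction=reduction)
    grad, = torch.autograd.grad(loss, x)
    return grad.flatten(1).norm(dim=1).mean().item()

for bs in (1, 16, 256):
    print(f"batch={bs:4d}  "
          f"mean-reduced grad norm={input_grad_norm(bs, 'mean'):.2e}  "
          f"sum-reduced grad norm={input_grad_norm(bs, 'sum'):.2e}")
```

Under mean reduction the printed gradient norms shrink roughly in proportion to 1/batch_size, which is the kind of vanishing-gradient behavior the abstract alludes to; attacks whose step sizes depend on raw gradient magnitudes would correspondingly weaken as the generation batch grows.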