Adversarial examples, crafted by adding imperceptible perturbations to natural inputs, can easily fool deep neural networks (DNNs). One of the most successful approaches to training adversarially robust DNNs is to solve a nonconvex-nonconcave minimax problem with an adversarial training (AT) algorithm. However, among the many AT algorithms, only Dynamic AT (DAT) and You Only Propagate Once (YOPO) guarantee convergence to a stationary point. In this work, we generalize the stochastic primal-dual hybrid gradient algorithm to develop semi-implicit hybrid gradient methods (SI-HGs) for finding stationary points of nonconvex-nonconcave minimax problems. SI-HGs achieve a convergence rate of $O(1/K)$, which improves upon the $O(1/K^{1/2})$ rate of DAT and YOPO. We devise a practical variant of SI-HGs and show that it outperforms other AT algorithms in terms of convergence speed and robustness.
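For concreteness, the AT minimax problem referred to above is typically written in the following standard form; the notation here (loss $\ell$, network $f_\theta$, perturbation budget $\epsilon$, data distribution $\mathcal{D}$) is illustrative and not taken from the paper itself:
\[
  \min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[ \max_{\|\delta\|_\infty \le \epsilon}
  \ell\big(f_\theta(x+\delta),\, y\big) \Big],
\]
where the outer minimization over the network parameters $\theta$ is nonconvex and the inner maximization over the perturbation $\delta$ is nonconcave in general, which is what makes convergence guarantees for AT algorithms difficult to obtain.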