Deep neural networks are vulnerable to adversarial attacks. Recent studies on adversarial robustness focus on the loss landscape in parameter space, since it is related to optimization and generalization performance. These studies conclude that the difficulty of adversarial training stems from the non-smoothness of the loss function: i.e., its gradient is not Lipschitz continuous. However, this analysis ignores the dependence of adversarial attacks on the model parameters. Since adversarial attacks are optimized against the model, they necessarily depend on its parameters. Taking this dependence into account, we analyze in more detail the smoothness of the adversarial training loss evaluated at the optimal attack for the current parameters. We reveal that the constraint on adversarial attacks is one cause of the non-smoothness, and that the smoothness depends on the type of constraint: specifically, the $L_\infty$ constraint can induce non-smoothness more readily than the $L_2$ constraint. Moreover, our analysis implies that flattening the loss function with respect to the input data tends to increase the Lipschitz constant of the gradient of the adversarial loss. To address the non-smoothness, we show that EntropySGD smooths the non-smooth loss and improves the performance of adversarial training.
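The constraint-dependence claim can be made concrete with a small numerical sketch. The Python example below is our own illustration, not code from the paper: the linear model, logistic loss, and all names and values in it (`adv_loss`, `grad_fd`, the chosen `eps`) are assumptions made for exposition. For a linear classifier the optimal attack has a closed form, so we can evaluate the adversarial loss exactly and finite-difference its gradient with respect to the parameters; sweeping one weight through zero shows the $L_\infty$ gradient jumping while the $L_2$ gradient varies smoothly.

```python
import numpy as np

def logistic_loss(z):
    # Binary logistic loss as a function of the margin z = y * (w @ x)
    return np.log1p(np.exp(-z))

def adv_loss(w, x, y, eps, norm):
    """Adversarial loss of a linear model under the optimal attack.

    For a margin-based loss the worst-case perturbation is closed-form:
      L_inf: delta* = -y*eps*sign(w)  -> margin shrinks by eps*||w||_1
      L_2  : delta* = -y*eps*w/||w||  -> margin shrinks by eps*||w||_2
    sign(w) makes the L_inf case a discontinuous function of w.
    """
    margin = y * (w @ x)
    penalty = eps * (np.abs(w).sum() if norm == "linf" else np.linalg.norm(w))
    return logistic_loss(margin - penalty)

def grad_fd(w, x, y, eps, norm, h=1e-6):
    # Central finite-difference gradient of the adversarial loss w.r.t. w
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = h
        g[i] = (adv_loss(w + e, x, y, eps, norm)
                - adv_loss(w - e, x, y, eps, norm)) / (2 * h)
    return g

x, y, eps = np.array([1.0, 0.5]), 1.0, 0.3
# Sweep w[0] through zero: the L_inf adversarial gradient jumps because
# the optimal attack flips with sign(w[0]); the L_2 one varies smoothly.
for w0 in (-0.01, -0.001, 0.001, 0.01):
    w = np.array([w0, 1.0])
    print(f"w0={w0:+.3f}  grad_linf={grad_fd(w, x, y, eps, 'linf')}"
          f"  grad_l2={grad_fd(w, x, y, eps, 'l2')}")
```

The jump in the first gradient coordinate as `w0` crosses zero means the gradient of the $L_\infty$ adversarial loss is not Lipschitz continuous there, whereas the $L_2$ adversarial loss stays smooth away from $w = 0$.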
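The EntropySGD remedy can likewise be sketched on a toy non-smooth objective. The following is a hypothetical illustration, not the paper's implementation: it follows the general algorithm of Chaudhari et al. (2017), where an inner SGLD loop estimates the mean of a local Gibbs distribution and the outer step descends the resulting local-entropy gradient, a smoothed surrogate for the raw gradient. Here the loss $|\theta|$ stands in for the non-smooth adversarial loss, and all hyperparameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(theta):
    # Subgradient of the toy non-smooth loss L(theta) = |theta|;
    # sign(theta) is not Lipschitz at 0, mimicking the non-smoothness
    # of the adversarial loss.
    return np.sign(theta)

def local_entropy_grad(theta, gamma=1.0, inner_steps=40, inner_lr=0.05,
                       temperature=1e-4, burn_in=20):
    """Estimate the local-entropy gradient gamma * (theta - mu) via SGLD.

    Inner SGLD samples theta' ~ exp(-L(theta') - gamma/2 (theta'-theta)^2);
    mu averages the post-burn-in samples (Chaudhari et al., 2017).
    """
    tp, samples = theta, []
    for t in range(inner_steps):
        g = grad(tp) + gamma * (tp - theta)  # gradient of the inner energy
        tp = (tp - inner_lr * g
              + np.sqrt(2 * inner_lr * temperature) * rng.standard_normal())
        if t >= burn_in:
            samples.append(tp)
    return gamma * (theta - np.mean(samples))

theta = 0.5
for _ in range(60):
    theta -= 0.1 * local_entropy_grad(theta)  # outer Entropy-SGD step
print(f"theta after Entropy-SGD: {theta:+.4f}")  # settles near the minimum at 0
# Plain subgradient descent, theta -= 0.1 * np.sign(theta), would instead
# keep oscillating around 0 at the step-size scale.
```

Because the local entropy behaves like a smoothed envelope of the raw loss, its gradient shrinks near the kink at $\theta = 0$ and the iterate settles, which is the sense in which EntropySGD addresses the non-smoothness of adversarial training.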