It has been consistently reported that many machine learning models are susceptible to adversarial attacks, i.e., small additive adversarial perturbations applied to data points can cause misclassification. Adversarial training via empirical risk minimization is considered the state-of-the-art defense against such attacks. Despite its success in practice, several questions about the generalization performance of adversarial training remain open. In this paper, we derive precise theoretical predictions for the performance of adversarial training in binary classification. We consider the high-dimensional regime where the data dimension grows proportionally with the size of the training dataset. Our results provide exact asymptotics for the standard and adversarial errors of estimators obtained by adversarial training with $\ell_q$-norm bounded perturbations ($q \ge 1$), for both discriminative binary models and generative Gaussian-mixture models. Furthermore, we use these sharp predictions to uncover several intriguing observations on the effect of various parameters, including the over-parameterization ratio, the data model, and the attack budget, on the adversarial and standard errors.
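For context, the adversarial-training objective referenced above is commonly written as a min-max empirical risk minimization problem; the notation below (margin loss $\ell$, linear parameter $\theta \in \mathbb{R}^d$, training pairs $(x_i, y_i)$ with $y_i \in \{\pm 1\}$, attack budget $\varepsilon$) is illustrative and not taken from the abstract itself:
\[
\min_{\theta} \; \frac{1}{n} \sum_{i=1}^{n} \max_{\|\delta_i\|_q \le \varepsilon} \ell\bigl( y_i \langle \theta, x_i + \delta_i \rangle \bigr).
\]
For linear predictors and a non-increasing margin loss $\ell$, H\"older's inequality collapses the inner maximization to the closed form $\ell\bigl( y_i \langle \theta, x_i \rangle - \varepsilon \|\theta\|_p \bigr)$, where $p$ satisfies $1/p + 1/q = 1$, which is what makes sharp asymptotic analysis of this objective tractable.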