It has been consistently reported that many machine learning models are susceptible to adversarial attacks, i.e., small additive adversarial perturbations applied to data points can cause misclassification. Adversarial training based on empirical risk minimization is considered to be the state-of-the-art method for defense against adversarial attacks. Despite its success in practice, several problems in understanding the generalization performance of adversarial training remain open. In this paper, we derive precise theoretical predictions for the performance of adversarial training in binary classification. We consider the high-dimensional regime where the data dimension grows at a constant ratio with the size of the training set. Our results provide exact asymptotics for the standard and adversarial test errors of the estimators obtained by adversarial training with $\ell_q$-norm bounded perturbations ($q \ge 1$), for both discriminative binary models and generative Gaussian-mixture models with correlated features. Furthermore, we use these sharp predictions to uncover several intriguing observations on how various parameters, including the over-parameterization ratio, the data model, and the attack budget, affect the adversarial and standard errors.
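To make the setting concrete, the following display is a minimal sketch of the robust empirical risk minimization described above, under the assumptions (not spelled out in the abstract itself) of a linear classifier $x \mapsto \theta^\top x$ and a non-increasing margin loss $\ell$. With $\varepsilon$ the attack budget and $p$ the dual exponent of $q$ ($1/p + 1/q = 1$), Hölder's inequality gives the inner maximization in closed form:
\[
\hat{\theta} \in \arg\min_{\theta} \frac{1}{n} \sum_{i=1}^{n} \max_{\|\delta_i\|_q \le \varepsilon} \ell\!\big(y_i\, \theta^\top (x_i + \delta_i)\big) = \arg\min_{\theta} \frac{1}{n} \sum_{i=1}^{n} \ell\!\big(y_i\, \theta^\top x_i - \varepsilon \|\theta\|_p\big).
\]
Intuitively, the worst-case perturbation aligns each $\delta_i$ against $y_i \theta$, shrinking the margin by exactly $\varepsilon \|\theta\|_p$; the standard test error then corresponds to evaluating the resulting estimator with $\varepsilon = 0$ at test time, while the adversarial test error measures misclassification under the same $\ell_q$-bounded attack.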