We study the error of linear regression under adversarial attacks, a framework in which an adversary perturbs the input to the regression model so as to maximize the prediction error. We provide bounds on the prediction error in the presence of an adversary as a function of the parameter norm and of the error in the absence of such an adversary. We show how these bounds make it possible to study the adversarial error using analyses developed for non-adversarial setups. The results shed light on the robustness of overparameterized linear models to adversarial attacks: adding features can be a source of either additional robustness or additional brittleness. On the one hand, we use asymptotic results to illustrate how double-descent curves arise for the adversarial error. On the other hand, we derive conditions under which the adversarial error can grow to infinity as more features are added while, at the same time, the test error goes to zero. We show that this behavior is caused by the norm of the parameter vector growing with the number of features. We also establish that $\ell_\infty$- and $\ell_2$-adversarial attacks can behave fundamentally differently, owing to how the $\ell_1$- and $\ell_2$-norms of random projections concentrate. Finally, we show how our reformulation allows adversarial training to be solved as a convex optimization problem. We exploit this fact to establish similarities between adversarial training and parameter-shrinking methods and to study how adversarial training affects the robustness of the estimated models.
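To make the role of the parameter norm concrete, the following is a minimal sketch of the underlying computation, assuming a squared loss and an attack of radius $\delta$ measured in an $\ell_p$-norm (the symbols $\delta$ and $\|\cdot\|_*$ for the dual norm are our notation, not necessarily the paper's). For a linear predictor $x \mapsto x^\top \beta$, Hölder's inequality, with equality attainable, gives the worst-case perturbation in closed form:
\[
\max_{\|\Delta x\|_p \le \delta} \bigl(y - (x + \Delta x)^\top \beta\bigr)^2
= \bigl(|y - x^\top \beta| + \delta \|\beta\|_*\bigr)^2,
\]
where $\|\cdot\|_*$ is the dual norm, i.e., $\ell_1$ for $p = \infty$ and $\ell_2$ for $p = 2$. The adversarial error thus decomposes into the clean residual plus a term proportional to the parameter norm, which is why $\ell_\infty$- and $\ell_2$-attacks hinge on how $\|\beta\|_1$ and $\|\beta\|_2$, respectively, grow with the number of features. Under the same assumptions, this identity turns adversarial training into the problem
\[
\min_\beta \; \frac{1}{n} \sum_{i=1}^n \bigl(|y_i - x_i^\top \beta| + \delta \|\beta\|_*\bigr)^2,
\]
which is convex in $\beta$ because it squares a nonnegative convex function of $\beta$.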