Robustness to adversarial perturbations is of paramount concern in modern machine learning. One of the state-of-the-art methods for training robust classifiers is adversarial training, which involves minimizing a supremum-based surrogate risk. The statistical consistency of surrogate risks is well understood in the context of standard machine learning, but not in the adversarial setting. In this paper, we characterize which supremum-based surrogates are consistent for distributions absolutely continuous with respect to Lebesgue measure in binary classification. Furthermore, we obtain quantitative bounds relating adversarial surrogate risks to the adversarial classification risk. Lastly, we discuss implications for the $\cH$-consistency of adversarial training.