Adversarial training is the most effective approach for improving the robustness of deep neural networks (DNNs). However, compared to the large body of research on optimizing the adversarial training process, investigations into how architecture components affect robustness are scarce, and they rarely constrain model capacity. Thus, it remains unclear where robustness precisely comes from. In this work, we present the first large-scale systematic study of the robustness of DNN architecture components under fixed parameter budgets. From this investigation, we distill 18 actionable robust network design guidelines for model developers. We demonstrate the guidelines' effectiveness by introducing the novel Robust Architecture (RobArch) model, which instantiates them to build a family of top-performing models across parameter capacities against strong adversarial attacks. RobArch achieves new state-of-the-art AutoAttack accuracy on the RobustBench ImageNet leaderboard. The code is available at $\href{https://github.com/ShengYun-Peng/RobArch}{\text{this url}}$.