Due to the vulnerability of deep neural networks (DNNs) to adversarial examples, many defense techniques have been proposed in recent years to alleviate this problem. However, progress in building more robust models is often hampered by incomplete or incorrect robustness evaluations. To accelerate research on reliably evaluating the adversarial robustness of current defense models in image classification, the TSAIL group at Tsinghua University and the Alibaba Security group organized this competition along with a CVPR 2021 workshop on adversarial machine learning (https://aisecure-workshop.github.io/amlcvpr2021/). The purpose of this competition was to motivate novel attack algorithms that evaluate adversarial robustness more effectively and reliably. Participants were encouraged to develop stronger white-box attack algorithms to find the worst-case robustness of different defenses. The competition was conducted on ARES (https://github.com/thu-ml/ares), an adversarial robustness evaluation platform, and was hosted on the TianChi platform (https://tianchi.aliyun.com/competition/entrance/531847/introduction) as part of the AI Security Challengers Program series. After the competition, we summarized the results and established a new adversarial robustness benchmark at https://ml.cs.tsinghua.edu.cn/ares-bench/, which allows users to upload adversarial attack algorithms and defense models for evaluation.