Adversarial training has proven to be a powerful regularization method for improving model generalization. However, current adversarial training methods attack only the original input sample or the embedding vectors, so their attacks lack coverage and diversity. To further enhance the breadth and depth of attack, we propose a novel masked-weight adversarial training method called DropAttack, which improves generalization by adding intentionally worst-case adversarial perturbations to both the input and hidden layers along different dimensions, and by minimizing the adversarial risk generated by each layer. DropAttack is a general technique and can be applied to a wide variety of neural networks with different architectures. To validate the effectiveness of the proposed method, we conducted experimental evaluations on five public datasets from the fields of natural language processing (NLP) and computer vision (CV). We compare the proposed method with other adversarial training and regularization methods, and our method achieves state-of-the-art results on all datasets. In addition, DropAttack can match the performance of standard training while using only half the training data. Theoretical analysis reveals that DropAttack performs gradient regularization on a random subset of the input and weight parameters of the model. Further visualization experiments show that DropAttack pushes the model's minimum risk toward lower and flatter loss landscapes. Our source code is publicly available at https://github.com/nishiwen1214/DropAttack.
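To make the idea concrete, the following is a minimal sketch of a DropAttack-style training objective on a single linear layer with squared loss, so that gradients can be written analytically. It is an illustration under simplifying assumptions, not the paper's implementation: the actual method applies masked fast-gradient perturbations to the embedding and weight matrices of a deep network; the function name, mask probability `p_attack`, and perturbation size `eps` are choices made here for the example.

```python
import random

def dropattack_loss(x, w, y, eps=0.1, p_attack=0.5, seed=0):
    """DropAttack-style loss sketch for a linear model f(x) = w . x.

    Both the input x and the weights w receive worst-case (sign-of-gradient)
    perturbations, but only on randomly masked coordinates; the training
    objective sums the clean risk and the adversarial risk.
    """
    rnd = random.Random(seed)
    sign = lambda v: (v > 0) - (v < 0)

    # Clean forward pass and squared loss.
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    clean_loss = 0.5 * err ** 2

    # Analytic gradients of the clean loss w.r.t. input and weights.
    g_x = [err * wi for wi in w]
    g_w = [err * xi for xi in x]

    # Random masks choose which coordinates get attacked (the "Drop" part).
    m_x = [1.0 if rnd.random() < p_attack else 0.0 for _ in x]
    m_w = [1.0 if rnd.random() < p_attack else 0.0 for _ in w]

    # FGM-style worst-case perturbations, restricted to masked coordinates.
    x_adv = [xi + eps * mi * sign(gi) for xi, mi, gi in zip(x, m_x, g_x)]
    w_adv = [wi + eps * mi * sign(gi) for wi, mi, gi in zip(w, m_w, g_w)]

    adv_err = sum(wi * xi for wi, xi in zip(w_adv, x_adv)) - y
    adv_loss = 0.5 * adv_err ** 2

    # Minimize clean risk plus the adversarial risk from the masked attack.
    return clean_loss + adv_loss
```

With `p_attack=0.0` no coordinate is attacked and the objective reduces to twice the clean loss; with `p_attack=1.0` every coordinate of both the input and the weights is perturbed, which increases the adversarial component of the loss.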


