Adversarial training has proven to be a powerful regularization method for improving the generalization of models. However, current adversarial training methods attack only the original input sample or the embedding vectors, so their attacks lack coverage and diversity. To further enhance the breadth and depth of the attack, we propose a novel masked-weight adversarial training method called DropAttack, which enhances the generalization of the model by adding intentionally worst-case adversarial perturbations to both the input and hidden layers in different dimensions, and by minimizing the adversarial risk generated at each layer. DropAttack is a general technique and can be applied to a wide variety of neural networks with different architectures. To validate the effectiveness of the proposed method, we conducted experimental evaluations on five public datasets from the fields of natural language processing (NLP) and computer vision (CV). We compare the proposed method with other adversarial training methods and regularization methods, and our method achieves state-of-the-art results on all datasets. In addition, DropAttack can match the performance of standard training methods while using only half of the training data. Theoretical analysis reveals that DropAttack performs gradient regularization at random on a subset of the input and weight parameters of the model. Further visualization experiments show that DropAttack can push the minimum risk of the model to a lower and flatter loss landscape. Our source code is publicly available at https://github.com/nishiwen1214/DropAttack.
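To make the core idea concrete, the following is a minimal, hypothetical NumPy sketch (not the authors' implementation) of a masked worst-case perturbation: a random binary mask selects which weight coordinates are attacked, and only those coordinates are pushed in the gradient-ascent direction. The function names, the linear model, and the hyperparameters `eps` and `p_attack` are illustrative assumptions; in the paper's method this perturbation is applied to inputs and hidden layers during training, and the resulting adversarial risk is minimized.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, x, y):
    # Squared-error loss of a toy linear model (illustrative stand-in
    # for the network's training loss).
    return 0.5 * (w @ x - y) ** 2

def grad_w(w, x, y):
    # Analytic gradient of the loss with respect to the weights.
    return (w @ x - y) * x

def dropattack_perturb(w, x, y, eps=0.1, p_attack=0.5):
    # Random mask decides which coordinates are attacked (cf. dropout);
    # the rest are left untouched.
    mask = (rng.random(w.shape) < p_attack).astype(float)
    # Worst-case (gradient-sign) perturbation on the masked coordinates only.
    return w + eps * mask * np.sign(grad_w(w, x, y))

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, -0.4])
y = 1.0

w_adv = dropattack_perturb(w, x, y)
# Perturbing along the gradient can only increase this convex loss,
# so the adversarial loss is at least the clean loss.
print(loss(w, x, y), loss(w_adv, x, y))
```

Training would then minimize the loss at the perturbed parameters (the adversarial risk) in addition to the clean loss, which is what pushes the model toward flatter minima.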