Neural networks (NNs) have become one of the most popular models in a wide range of signal processing applications. However, NNs are highly vulnerable to adversarial examples (AEs). Adversarial training (AT) is widely regarded as the most effective defense against AEs, but its intensive computation prevents it from being applied in most applications. In this paper, to address this problem, we design a generic and efficient AT improvement scheme, namely case-aware adversarial training (CAT). The intuition stems from the observation that a small fraction of informative samples contributes most of a model's performance. Accordingly, if only the most informative AEs are used in AT, the computational complexity of AT can be reduced significantly while the defense effect is maintained. To achieve this, CAT makes two contributions. First, a method to estimate the information degree of adversarial examples is proposed for AE filtering. Second, to further enrich the information that the NN can obtain from AEs, CAT employs a sampling strategy based on weight estimation and class-level balancing to increase the diversity of AT at each iteration. Extensive experiments show that CAT is up to 3x faster than vanilla AT while achieving a competitive defense effect.
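The core idea of filtering AEs by information degree combined with class-level balancing can be illustrated with a minimal sketch. The abstract does not specify how the information degree is computed, so the sketch below uses per-example loss as a hypothetical proxy: within each class, the highest-loss AEs are kept until a per-class share of the total budget is filled. The function name and signature are illustrative, not the paper's actual API.

```python
import numpy as np

def select_informative_aes(losses, labels, budget, num_classes):
    """Select a class-balanced subset of adversarial examples.

    losses  : per-example loss values (proxy for 'information degree';
              the paper's actual estimator is not specified here)
    labels  : class label of each example
    budget  : total number of AEs to keep for this AT iteration
    Returns the sorted indices of the selected examples.
    """
    per_class = budget // num_classes  # class-level balancing
    chosen = []
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        # rank this class's AEs by descending loss (most informative first)
        ranked = idx[np.argsort(losses[idx])[::-1]]
        chosen.extend(ranked[:per_class].tolist())
    return sorted(chosen)

labels = np.array([0, 0, 0, 1, 1, 1])
losses = np.array([0.1, 0.9, 0.5, 0.2, 0.8, 0.3])
# keep 2 AEs per class, the highest-loss ones in each
print(select_informative_aes(losses, labels, budget=4, num_classes=2))
# → [1, 2, 4, 5]
```

Because only the selected subset is used to update the model, the per-iteration cost of AT scales with the budget rather than the full training set, which is the source of the claimed speedup.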