In this paper, we propose a defence strategy that improves adversarial robustness by incorporating hidden-layer representations. The key idea of this defence is to compress or filter input information, including adversarial perturbations. The defence can be regarded as an activation function and can therefore be applied to any kind of neural network. We also theoretically prove the effectiveness of this defence strategy under certain conditions. In addition, by incorporating hidden-layer representations, we propose three types of adversarial attacks, each generating a corresponding type of adversarial example. Experiments show that our defence method significantly improves the adversarial robustness of deep neural networks and achieves state-of-the-art performance even without adversarial training.
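To make the "activation-function" view of the defence concrete, the following is a minimal sketch of how such a compressing/filtering layer could be dropped into an ordinary network. The layer name CompressiveActivation, the soft-thresholding rule, and the threshold hyperparameter are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class CompressiveActivation(nn.Module):
    """Hypothetical activation-style layer that compresses hidden-layer
    representations by soft-thresholding small activation values, so that
    low-magnitude (perturbation-like) components are filtered out.
    This is an illustrative sketch, not the proposed method itself."""

    def __init__(self, threshold: float = 0.1):
        super().__init__()
        self.threshold = threshold  # assumed filtering strength (hyperparameter)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Zero out components with magnitude below the threshold and shrink
        # the rest, acting as a simple information filter on the activations.
        return torch.sign(x) * torch.clamp(x.abs() - self.threshold, min=0.0)


# Example: using the layer in place of a standard activation in a small
# feed-forward classifier, illustrating that it is architecture-agnostic.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    CompressiveActivation(threshold=0.1),
    nn.Linear(256, 10),
)

if __name__ == "__main__":
    dummy = torch.randn(4, 1, 28, 28)  # e.g. a batch of MNIST-sized inputs
    print(model(dummy).shape)          # -> torch.Size([4, 10])
```

Because the layer only transforms activations, it can be inserted after any hidden layer of an existing architecture without changing the training pipeline, which is what allows the defence to be combined with (or used without) adversarial training.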