Deep neural network models are used today in a wide variety of artificial intelligence applications, and strengthening them against adversarial attacks is of particular importance. A standard defense against adversarial attacks is adversarial training, which strikes a trade-off between robustness and generalization. This paper introduces a novel framework, Layer Sustainability Analysis (LSA), for analyzing the vulnerability of the layers of a given neural network under adversarial attack. LSA can serve as a helpful toolkit for assessing deep neural networks and for extending adversarial training approaches towards improving the sustainability of model layers via layer monitoring and analysis. The LSA framework identifies the Most Vulnerable Layers (MVL list) of a given network, using the relative error as a comparison measure to evaluate the sustainability of each layer's representation against adversarial inputs. The proposed approach for obtaining robust neural networks that fend off adversarial attacks incorporates a layer-wise regularization (LR) term, derived from the LSA proposals, into adversarial training (AT); i.e., the AT-LR procedure. AT-LR can be combined with any benchmark adversarial attack to reduce the vulnerability of network layers and to improve upon conventional adversarial training. The proposed idea performs well theoretically and experimentally for state-of-the-art multilayer perceptron and convolutional neural network architectures. Relative to the corresponding base adversarial training, AT-LR improves classification accuracy under more significant perturbations by 16.35%, 21.79%, and 10.73% on the Moon, MNIST, and CIFAR-10 benchmark datasets, respectively. The LSA framework is open-source and available at https://github.com/khalooei/LSA.
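
To make the comparison measure concrete, the following is a minimal PyTorch sketch of how the per-layer relative error between clean and adversarial representations could be computed. The helper name `layer_relative_errors` and the choice of hooked layer types are illustrative assumptions, not the released LSA API; see the repository above for the actual toolkit.

```python
import torch
import torch.nn as nn

def layer_relative_errors(model: nn.Module, x_clean: torch.Tensor,
                          x_adv: torch.Tensor) -> dict:
    """Relative error of each layer's representation on adversarial vs. clean
    input: ||f_l(x_adv) - f_l(x)|| / ||f_l(x)||, averaged over the batch.
    Hypothetical helper, not the released LSA API."""
    acts = {}

    def make_hook(name):
        def hook(module, inputs, output):
            # Each hooked layer fires once per forward pass:
            # first the clean pass, then the adversarial pass.
            acts.setdefault(name, []).append(output.detach().flatten(1))
        return hook

    handles = [m.register_forward_hook(make_hook(n))
               for n, m in model.named_modules()
               if isinstance(m, (nn.Linear, nn.Conv2d))]
    model.eval()
    with torch.no_grad():
        model(x_clean)  # records clean activations
        model(x_adv)    # records adversarial activations
    for h in handles:
        h.remove()

    errors = {}
    for name, (a_clean, a_adv) in acts.items():
        num = (a_adv - a_clean).norm(dim=1)
        den = a_clean.norm(dim=1).clamp_min(1e-12)
        errors[name] = (num / den).mean().item()
    return errors

# Layers sorted by decreasing relative error yield the MVL list:
# mvl = sorted(errors, key=errors.get, reverse=True)
```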
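Similarly, here is a hedged sketch of how the AT-LR objective could augment a base adversarial-training loss with a layer-wise regularizer on the identified MVL layers. The names `at_lr_loss`, `mvl_names`, and `gamma`, as well as the L2 drift penalty, are assumptions for illustration; the paper's exact regularization term may differ.

```python
import torch.nn.functional as F

def at_lr_loss(model, x_clean, x_adv, y, mvl_names, gamma=0.01):
    """Illustrative AT-LR objective: adversarial cross-entropy plus a
    layer-wise penalty on the representation drift of the MVL layers.
    `gamma` and the L2 drift penalty are assumptions, not necessarily
    the paper's exact formulation."""
    acts = {}

    def make_hook(name):
        def hook(module, inputs, output):
            acts.setdefault(name, []).append(output.flatten(1))
        return hook

    modules = dict(model.named_modules())
    handles = [modules[n].register_forward_hook(make_hook(n))
               for n in mvl_names]
    logits_adv = model(x_adv)  # adversarial pass (activations recorded first)
    model(x_clean)             # clean pass (activations recorded second)
    for h in handles:
        h.remove()

    # Penalize how far each vulnerable layer's representation drifts
    # between the adversarial and clean forward passes.
    reg = sum((a_adv - a_clean).norm(dim=1).mean()
              for a_adv, a_clean in acts.values())
    return F.cross_entropy(logits_adv, y) + gamma * reg
```

The adversarial batch `x_adv` can come from any benchmark attack (e.g., PGD), which is consistent with the abstract's claim that AT-LR is attack-agnostic.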