Deep neural network models are used today in various applications of artificial intelligence, and strengthening them against adversarial attacks is of particular importance. An appropriate response to adversarial attacks is adversarial training, which reaches a trade-off between robustness and generalization. This paper introduces a novel framework, Layer Sustainability Analysis (LSA), for analyzing the vulnerability of the layers of a given neural network under adversarial attack. LSA can be a helpful toolkit for assessing deep neural networks and for extending adversarial training approaches toward improving the sustainability of model layers via layer monitoring and analysis. The LSA framework identifies a list of Most Vulnerable Layers (MVL list) of a given network. The relative error, as a comparison measure, is used to evaluate the representation sustainability of each layer against adversarial inputs. The proposed approach for obtaining robust neural networks that fend off adversarial attacks is based on a layer-wise regularization (LR) over the LSA proposal(s) for adversarial training (AT), i.e., the AT-LR procedure. AT-LR can be used with any benchmark adversarial attack to reduce the vulnerability of network layers and to improve conventional adversarial training approaches. The proposed idea performs well theoretically and experimentally for state-of-the-art multilayer perceptron and convolutional neural network architectures. Compared with its corresponding base adversarial training, AT-LR improves classification accuracy under larger perturbations by 16.35%, 21.79%, and 10.730% on the Moon, MNIST, and CIFAR-10 benchmark datasets, respectively. The LSA framework is available and published at https://github.com/khalooei/LSA.
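To make the relative-error comparison measure concrete, the following is a minimal sketch, not the authors' implementation (the official toolkit is in the LSA repository linked above). It assumes a PyTorch model and user-supplied clean and adversarial batches; the helper name `layer_relative_errors` and the choice of the per-sample L2 norm are illustrative assumptions. Each leaf layer's representation on clean inputs is compared with its representation on adversarial inputs, and layers are ranked by relative error to form an MVL list.

```python
# Illustrative sketch of a layer-wise relative-error measure (assumptions noted
# in the text above; not the authors' official implementation).
import torch

def layer_relative_errors(model, x_clean, x_adv):
    """Return {layer_name: mean relative error} over all leaf modules."""
    acts_clean, acts_adv = {}, {}

    def make_hook(store, name):
        def hook(module, inputs, output):
            store[name] = output.detach().flatten(1)
        return hook

    def capture(store, x):
        handles = [m.register_forward_hook(make_hook(store, n))
                   for n, m in model.named_modules()
                   if len(list(m.children())) == 0]  # leaf layers only
        with torch.no_grad():
            model(x)
        for h in handles:
            h.remove()

    capture(acts_clean, x_clean)
    capture(acts_adv, x_adv)

    errors = {}
    for name in acts_clean:
        clean, adv = acts_clean[name], acts_adv[name]
        # Relative error ||f(x_adv) - f(x)|| / ||f(x)||, averaged over the batch.
        num = (adv - clean).norm(dim=1)
        den = clean.norm(dim=1).clamp_min(1e-12)
        errors[name] = (num / den).mean().item()
    return errors

# Layers with the largest relative error would form the MVL list, e.g.:
#   errors = layer_relative_errors(model, x_clean, x_adv)
#   mvl = sorted(errors, key=errors.get, reverse=True)
```

Under this sketch, an AT-LR-style objective would add a regularization term on the MVL layers' clean-versus-adversarial representation gap to the usual adversarial training loss, penalizing exactly the layers this measure flags as least sustainable.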