Despite substantial advances in network architecture performance, susceptibility to adversarial attacks makes deep learning challenging to deploy in safety-critical applications. This paper proposes a data-centric approach to addressing this problem. A nonlocal denoising method with different luminance values is used to generate adversarial examples from the Modified National Institute of Standards and Technology (MNIST) and Canadian Institute for Advanced Research (CIFAR-10) datasets. Under perturbation, the method provided absolute accuracy improvements of up to 9.3% on the MNIST dataset and 13% on the CIFAR-10 dataset. Training on transformed images with higher luminance values increases the robustness of the classifier. We also show that transfer learning is disadvantageous for adversarial machine learning. The results indicate that simple adversarial examples can improve resilience and make deep learning easier to apply in various applications.
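The abstract does not give implementation details, but one plausible reading of "nonlocal denoising with different luminance values" is non-local means filtering applied at several luminance filter-strength settings to produce transformed training images. The sketch below illustrates that idea on a CIFAR-10-sized image using OpenCV's fastNlMeansDenoisingColored; the choice of function, the h values, and the helper name nl_means_variants are illustrative assumptions, not the authors' code.

```python
# Minimal sketch, assuming non-local means filtering with varied luminance
# filter strength (h) is used to create transformed training images.
import cv2
import numpy as np

def nl_means_variants(image_uint8, h_values=(5, 10, 20)):
    """Return non-local-means filtered copies of an RGB uint8 image,
    one per luminance filter strength in h_values (values are illustrative)."""
    variants = []
    for h in h_values:
        # Positional arguments: src, dst, h (luminance filter strength),
        # hColor (color filter strength), templateWindowSize, searchWindowSize.
        filtered = cv2.fastNlMeansDenoisingColored(image_uint8, None, h, h, 7, 21)
        variants.append(filtered)
    return variants

if __name__ == "__main__":
    # Random stand-in for a 32x32 CIFAR-10 image; grayscale MNIST images
    # would use cv2.fastNlMeansDenoising instead.
    img = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
    augmented = nl_means_variants(img)
    print([v.shape for v in augmented])
```

In a data-centric pipeline, such filtered variants would be added to the training set alongside the original images so the classifier learns to tolerate the perturbation.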