As our reliance on Deep Learning (DL) models grows, safeguarding the security of these systems has become essential. This paper examines security issues in Deep Learning and, through experiments, analyses how to build more resilient models. The experiments identify the strengths and weaknesses of a new approach for improving the robustness of DL models against adversarial attacks. The results demonstrate improvements and provide insights that can serve as recommendations for researchers and practitioners aiming to build increasingly robust DL algorithms.