As deep learning applications, especially computer vision programs, are increasingly deployed in our daily lives, the security of these applications demands ever more urgent attention. One effective way to improve the security of deep learning models is adversarial training, which teaches the model to withstand samples deliberately crafted to attack it. Building on this idea, we propose a simple architecture for constructing a model with a certain degree of robustness: it improves the robustness of the trained network by adding an adversarial sample detection network for cooperative training. At the same time, we design a new data sampling strategy that incorporates multiple existing attacks, allowing the model to adapt to many different adversarial attacks within a single training run. We conducted experiments on the CIFAR-10 dataset to test the effectiveness of this design, and the results indicate that it improves the robustness of the model to some degree. Our code is available at https://github.com/dowdyboy/simple_structure_for_robust_model .
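To make the two ideas above concrete, here is a minimal PyTorch sketch of a classifier with an auxiliary adversarial-detection head trained cooperatively, plus a per-batch sampling step that draws one attack from a pool. All names here (RobustNet, fgsm, train_step, the loss weighting, and the tiny backbone) are hypothetical illustrations under assumed conventions, not the exact implementation in the linked repository.

```python
# Hypothetical sketch: joint classification + adversarial-detection training,
# with one attack sampled from a pool per batch. Not the authors' exact code.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class RobustNet(nn.Module):
    """A classifier with a second head that predicts clean vs. adversarial."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(            # shared feature extractor
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, num_classes)  # class logits
        self.detector = nn.Linear(32, 2)              # clean-vs-adversarial logits

    def forward(self, x):
        feats = self.backbone(x)
        return self.classifier(feats), self.detector(feats)

def fgsm(model, x, y, eps=8 / 255):
    """One-step FGSM; stands in for any attack in the pool (e.g. PGD)."""
    x = x.clone().detach().requires_grad_(True)
    logits, _ = model(x)
    F.cross_entropy(logits, y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def train_step(model, optimizer, x, y, attacks):
    attack = random.choice(attacks)               # sample one attack per batch
    x_adv = attack(model, x, y)
    inputs = torch.cat([x, x_adv])                # clean and adversarial halves
    labels = torch.cat([y, y])
    is_adv = torch.cat([                          # detection targets
        torch.zeros(len(x), dtype=torch.long, device=x.device),
        torch.ones(len(x), dtype=torch.long, device=x.device),
    ])
    logits, det_logits = model(inputs)
    # Cooperative objective: classify correctly AND flag adversarial inputs.
    loss = F.cross_entropy(logits, labels) + F.cross_entropy(det_logits, is_adv)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage (x: CIFAR-10 batch scaled to [0, 1], y: integer labels):
#   model = RobustNet()
#   opt = torch.optim.SGD(model.parameters(), lr=0.01)
#   train_step(model, opt, x, y, attacks=[fgsm])
```

Extending the `attacks` pool with further methods (e.g. a PGD callable with the same signature) is what lets a single training run expose the model to many different adversarial attacks, as the abstract describes.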