Intelligent Internet of Things (IoT) systems based on deep neural networks (DNNs) have been widely deployed in the real world. However, DNNs are vulnerable to adversarial examples, which raises concerns about the reliability and security of intelligent IoT systems. Testing and evaluating the robustness of IoT systems therefore becomes necessary and essential. Various attacks and strategies have been proposed recently, but the efficiency problem remains unsolved. Existing methods are either computationally expensive or time-consuming, which limits their applicability in practice. In this paper, we propose a novel framework called Attack-Inspired GAN (AI-GAN) to generate adversarial examples conditionally. Once trained, it can generate adversarial perturbations efficiently given input images and target classes. We apply AI-GAN to different datasets in white-box settings, in black-box settings, and against target models protected by state-of-the-art defenses. Through extensive experiments, AI-GAN achieves high attack success rates, outperforming existing methods, and significantly reduces generation time. Moreover, for the first time, AI-GAN successfully scales to complex datasets, e.g., CIFAR-100 and ImageNet, with about $90\%$ success rates across all classes.
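To make the conditional-generation interface concrete, the following is a minimal sketch of what a generator like AI-GAN's exposes at inference time: it takes an input image and a target class, and in a single forward pass emits a norm-bounded perturbation. All shapes, layer sizes, and the $\epsilon$ budget here are illustrative assumptions, and the randomly initialized weights merely stand in for a trained generator; this is not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 10          # hypothetical: e.g., CIFAR-10
IMG_DIM = 32 * 32 * 3     # flattened image
HIDDEN = 64               # illustrative hidden width
EPS = 8.0 / 255.0         # assumed L-infinity perturbation budget

# Randomly initialized weights stand in for a trained generator.
W1 = rng.normal(0.0, 0.02, (IMG_DIM + NUM_CLASSES, HIDDEN))
W2 = rng.normal(0.0, 0.02, (HIDDEN, IMG_DIM))

def generate_perturbation(x: np.ndarray, target: int) -> np.ndarray:
    """One forward pass: condition on the target class by concatenating
    a one-hot label to the image, then bound the output via tanh * EPS."""
    onehot = np.zeros(NUM_CLASSES)
    onehot[target] = 1.0
    z = np.concatenate([x, onehot])      # class-conditional input
    h = np.tanh(z @ W1)
    delta = EPS * np.tanh(h @ W2)        # guarantees ||delta||_inf <= EPS
    return delta

x = rng.uniform(0.0, 1.0, IMG_DIM)       # a dummy image in [0, 1]
delta = generate_perturbation(x, target=3)
x_adv = np.clip(x + delta, 0.0, 1.0)     # adversarial example stays a valid image
```

Because the attack is amortized into the generator's weights, producing a perturbation costs one forward pass rather than an iterative optimization per image, which is the source of the efficiency gain the abstract claims.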