Recently, Deep Neural Networks (DNNs) have achieved remarkable performance in many applications, while several studies have exposed their vulnerability to malicious attacks. In this paper, we emulate the effects of natural weather conditions to introduce plausible perturbations that mislead DNNs. By observing the effects of such atmospheric perturbations on camera lenses, we model their patterns to create different masks that fake the effects of rain, snow, and hail. Even though the perturbations introduced by our attacks are visible, their presence remains unnoticed because they are associated with natural events, which can be especially catastrophic for fully-autonomous and unmanned vehicles. We test our proposed fakeWeather attacks on multiple Convolutional Neural Network and Capsule Network models, and report noticeable accuracy drops in the presence of such adversarial perturbations. Our work introduces a new security threat for DNNs, which is especially severe for safety-critical applications and autonomous systems.
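The mask-based idea described above could be sketched as follows. This is a minimal illustration, not the authors' actual implementation: the function names (`fake_rain_mask`, `apply_fake_weather`) and all parameters (streak count, length, intensity) are hypothetical assumptions chosen for demonstration.

```python
import numpy as np

def fake_rain_mask(shape, num_drops=200, streak_len=8, intensity=0.8, seed=0):
    """Build an additive 'rain' mask: short slanted bright streaks
    scattered over an image of the given (H, W) shape.
    All parameters are illustrative, not from the paper."""
    rng = np.random.default_rng(seed)
    h, w = shape
    mask = np.zeros((h, w), dtype=np.float32)
    ys = rng.integers(0, h, num_drops)  # random streak start rows
    xs = rng.integers(0, w, num_drops)  # random streak start columns
    for y0, x0 in zip(ys, xs):
        for t in range(streak_len):
            y, x = y0 + t, x0 + t // 2  # slant the streak diagonally
            if y < h and x < w:
                mask[y, x] = intensity
    return mask

def apply_fake_weather(image, mask):
    """Blend the perturbation mask into a grayscale image in [0, 1]."""
    return np.clip(image + mask, 0.0, 1.0)

# Perturb a dummy uniform-gray image with the fake-rain mask.
image = np.full((64, 64), 0.3, dtype=np.float32)
perturbed = apply_fake_weather(image, fake_rain_mask(image.shape))
```

A snow or hail variant would follow the same pattern, replacing the streak-drawing loop with round blobs of the appropriate size and brightness.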