As research on deep neural networks advances, deep convolutional networks have become promising for autonomous driving tasks. In particular, there is an emerging trend of employing end-to-end neural network models for autonomous driving. However, previous research has shown that deep neural network classifiers are vulnerable to adversarial attacks, whereas for regression tasks the effect of adversarial attacks is not as well understood. In this research, we devise two white-box targeted attacks against end-to-end autonomous driving models. Our attacks manipulate the behavior of the autonomous driving system by perturbing the input image. Averaged over 800 attacks at the same attack strength (epsilon=1), the image-specific and image-agnostic attacks deviate the steering angle from the original output by 0.478 and 0.111, respectively, which is much stronger than random noise, which perturbs the steering angle by only 0.002 (the steering angle ranges over [-1, 1]). Both attacks can be initiated in real time on CPUs without employing GPUs. Demo video: https://youtu.be/I0i8uN2oOP0.
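The following is a minimal sketch, not the paper's exact implementation, of how an image-specific white-box targeted attack on a steering-angle regression model could be carried out: iteratively step the input image along the sign of the gradient of a loss toward a target steering angle, while keeping the perturbation inside an L-infinity ball. The model name `driving_model`, the function `targeted_steering_attack`, the epsilon interpretation (pixels normalized to [0, 1]), and all hyperparameters are assumptions for illustration only.

```python
# Illustrative sketch only (hypothetical names and hyperparameters): an
# image-specific, white-box targeted attack on a steering-angle regressor.
# Assumes a PyTorch model `driving_model` that maps a [C, H, W] image in
# [0, 1] to a steering angle in [-1, 1], running on CPU.
import torch

def targeted_steering_attack(driving_model, image, target_angle,
                             epsilon=1.0 / 255, steps=10, step_size=0.25 / 255):
    """Nudge `image` so the predicted steering angle moves toward
    `target_angle`, keeping the perturbation within an L-infinity
    ball of radius `epsilon` around the original image."""
    driving_model.eval()
    perturbed = image.clone().detach()
    target = torch.tensor(target_angle, dtype=perturbed.dtype)

    for _ in range(steps):
        perturbed.requires_grad_(True)
        pred = driving_model(perturbed.unsqueeze(0)).squeeze()  # predicted angle
        loss = torch.nn.functional.mse_loss(pred, target)       # distance to target
        grad = torch.autograd.grad(loss, perturbed)[0]

        with torch.no_grad():
            # Gradient-sign step toward the target angle, then project back
            # into the epsilon-ball and the valid pixel range.
            perturbed = perturbed - step_size * grad.sign()
            perturbed = torch.clamp(perturbed, image - epsilon, image + epsilon)
            perturbed = torch.clamp(perturbed, 0.0, 1.0).detach()

    return perturbed
```

An image-agnostic variant would instead optimize a single perturbation over a batch of training images and apply it unchanged at test time; the sketch above covers only the image-specific case.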