Deep Neural Networks (DNNs) are increasingly applied in the real world in safety-critical applications such as advanced driver assistance systems. One example of such a use case is traffic sign recognition. At the same time, it is known that current DNNs can be fooled by adversarial attacks, which raises safety concerns if those attacks can be carried out under realistic conditions. In this work, we apply different black-box attack methods to generate perturbations that are applied in the physical environment and can be used to fool systems under varying environmental conditions. To the best of our knowledge, we are the first to combine a general framework for physical attacks with different black-box attack methods and to study the impact of these methods on the success rate of the attack under the same setting. We show that reliable physical adversarial attacks can be performed with different methods and that the perceptibility of the resulting perturbations can also be reduced. These findings highlight the need for viable defenses of a DNN even in the black-box case, but at the same time they form the basis for securing a DNN with methods such as adversarial training, which uses adversarial attacks to augment the original training data.