Deep neural networks are known to be susceptible to adversarial perturbations -- small, norm-bounded perturbations that alter the output of the network. While such perturbations are usually discussed as tailored to a specific input, a universal perturbation can be constructed to alter the model's output on a set of inputs. Universal perturbations present a more realistic case of adversarial attacks, as awareness of the model's exact input is not required. In addition, the universal attack setting raises the question of generalization to unseen data: a universal perturbation computed on a given set of inputs should also alter the model's output on out-of-sample data. In this work, we study physical passive patch adversarial attacks on visual odometry-based autonomous navigation systems. A visual odometry system aims to infer the relative camera motion between two corresponding viewpoints, and is frequently used by vision-based autonomous navigation systems to estimate their state. For such navigation systems, a patch adversarial perturbation poses a severe security issue, as it can be used to mislead a system onto a collision course. To the best of our knowledge, we show for the first time that the error margin of a visual odometry model can be significantly increased by deploying patch adversarial attacks in the scene. We evaluate our attacks on synthetic closed-loop drone navigation data and demonstrate that a comparable vulnerability exists in real data. A reference implementation of the proposed method and the reported experiments is provided at https://github.com/patchadversarialattacks/patchadversarialattacks.
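To make the universal patch setting concrete, the following is a minimal sketch of a PGD/Adam-style optimization of a single patch over a set of frame pairs. It is not the paper's implementation (see the repository above for that); `vo_model`, `apply_patch`, and `optimize_universal_patch` are hypothetical names, the patch placement here is a fixed image-space paste rather than the physically modeled placement used in the actual attack, and the loss simply maximizes the deviation of the estimated relative motion from the ground truth.

```python
import torch

def apply_patch(frame, patch, top=20, left=20):
    """Paste the patch at a fixed image location.

    Hypothetical stand-in for the differentiable, geometry-aware placement
    a physical patch attack would require."""
    patched = frame.clone()
    _, h, w = patch.shape
    patched[..., top:top + h, left:left + w] = patch
    return patched

def optimize_universal_patch(vo_model, frame_pairs, target_motions,
                             patch_shape=(3, 64, 64), steps=200, lr=1e-2):
    """Optimize one universal patch over a set of inputs (sketch only).

    `vo_model` is assumed to map a pair of frames to a relative-motion
    estimate; `frame_pairs` and `target_motions` are the training set of
    viewpoint pairs and their ground-truth relative motions."""
    patch = torch.rand(patch_shape, requires_grad=True)   # valid image values in [0, 1]
    optimizer = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        total_loss = 0.0
        for (frame_a, frame_b), motion_gt in zip(frame_pairs, target_motions):
            motion_est = vo_model(apply_patch(frame_a, patch),
                                  apply_patch(frame_b, patch))
            # Universal objective: one patch that pushes the estimated motion
            # away from the true motion on every input in the set.
            total_loss = total_loss - torch.norm(motion_est - motion_gt)

        optimizer.zero_grad()
        total_loss.backward()
        optimizer.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)                         # keep the patch printable

    return patch.detach()
```

Generalization to unseen data, as discussed above, would then be measured by applying the returned patch to held-out frame pairs and recording the resulting motion-estimation error.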