In the past decade, deep learning, with its strong feature learning capability, has dramatically displaced the traditional hand-crafted feature paradigm, resulting in tremendous improvements on conventional tasks. However, deep neural networks (DNNs) have recently been shown to be vulnerable to adversarial examples: malicious samples crafted by adding small, elaborately designed noise that misleads DNNs into making wrong decisions while remaining imperceptible to humans. Adversarial attacks can be divided into digital adversarial attacks and physical adversarial attacks. Digital adversarial attacks are mostly performed in laboratory environments and focus on improving the performance of attack algorithms. In contrast, physical adversarial attacks target DNN systems deployed in the physical world, a more challenging task owing to the complexity of the physical environment (e.g., brightness changes, occlusion, and so on). Although the discrepancy between digital and physical adversarial examples is small, physical adversarial examples require specific designs to overcome the effects of the complex physical environment. In this paper, we review the development of physical adversarial attacks against DNN-based computer vision tasks, including image recognition, object detection, and semantic segmentation. For completeness of the algorithmic evolution, we also briefly introduce works that do not involve physical adversarial attacks. We first present a categorization scheme to summarize current physical adversarial attacks. We then discuss the advantages and disadvantages of existing physical adversarial attacks, focusing on the techniques used to maintain adversarial effectiveness when attacks are applied in the physical environment. Finally, we point out open issues of current physical adversarial attacks and suggest promising research directions.
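To make the digital/physical distinction concrete, the following minimal PyTorch sketch illustrates one widely used way to maintain adversarial effectiveness under physical-world variation, Expectation Over Transformation (EOT): the perturbation is optimized against the expected loss over random physical-like transformations rather than against a single clean input. All names here (`model`, the transformation set, step sizes) are illustrative placeholders under assumed conditions, not the method of any particular surveyed work.

```python
# A minimal sketch of an EOT-style attack step, assuming a pretrained
# PyTorch classifier `model` and a correctly labeled batch (x_adv, y).
# The transformations below (random brightness, small translations) only
# crudely approximate real physical conditions.
import torch
import torch.nn.functional as F

def random_physical_transform(x):
    """Simulate simple physical variation: random brightness and translation."""
    brightness = torch.empty(1).uniform_(0.7, 1.3).item()
    dy, dx = torch.randint(-4, 5, (2,)).tolist()
    x = (x * brightness).clamp(0.0, 1.0)
    return torch.roll(x, shifts=(dy, dx), dims=(-2, -1))

def eot_attack_step(model, x_adv, y, step_size=1 / 255, n_samples=8):
    """One signed-gradient step on the loss averaged over random transforms,
    so the perturbation stays adversarial across the simulated environment."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = sum(
        F.cross_entropy(model(random_physical_transform(x_adv)), y)
        for _ in range(n_samples)
    ) / n_samples
    loss.backward()
    return (x_adv + step_size * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

Averaging over transformations is precisely what distinguishes this from a purely digital attack, which would take the same signed-gradient step against the untransformed input alone.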