Deep Learning (DL) has shown great success in many human-related tasks, which has led to its adoption in many computer vision based applications, such as security surveillance systems, autonomous vehicles, and healthcare. Such safety-critical applications can only be deployed successfully once they are able to overcome safety-critical challenges. Among these challenges is the defense against, and/or the detection of, adversarial examples (AEs). An adversary can carefully craft small, often imperceptible, noise, called a perturbation, that is added to a clean image to generate an AE. The aim of the AE is to fool the DL model, which makes it a potential risk for DL applications. Many test-time evasion attacks and countermeasures, i.e., defense or detection methods, have been proposed in the literature. Moreover, a few reviews and surveys have been published that theoretically present the taxonomy of threats and countermeasure methods, with little focus on AE detection methods. In this paper, we attempt to provide a theoretical and experimental review of AE detection methods. A detailed discussion of such methods is provided, and experimental results for eight state-of-the-art detectors are presented under different scenarios on four datasets. We also discuss potential challenges and future perspectives for this research direction.
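For context, an AE is commonly formalized as a clean input perturbed by small, norm-bounded noise; the following is a minimal sketch of this standard formulation, with the Fast Gradient Sign Method (FGSM) as one canonical attack (the notation is assumed here for illustration rather than taken from this paper):
\[
x_{\mathrm{adv}} = x + \delta, \qquad \|\delta\|_{\infty} \le \epsilon, \qquad f_{\theta}(x_{\mathrm{adv}}) \neq y,
\]
\[
\text{e.g.\ FGSM:}\quad \delta = \epsilon \cdot \mathrm{sign}\!\big(\nabla_{x} L\big(f_{\theta}(x), y\big)\big),
\]
where $x$ is a clean image with true label $y$, $f_{\theta}$ is the DL model, $L$ is its loss function, and $\epsilon$ bounds the perturbation magnitude so that the noise remains (nearly) imperceptible.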