Deep Learning (DL) is the most widely used tool in the contemporary field of computer vision. Its ability to accurately solve complex problems is employed in vision research to learn deep neural models for a variety of tasks, including security-critical applications. However, it is now known that DL is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos. Since the discovery of this phenomenon in 2013 [1], it has attracted significant attention from researchers across multiple sub-fields of machine intelligence. In [2], we reviewed the contributions made by the computer vision community in adversarial attacks on deep learning (and their defenses) up to 2018. Many of those contributions have inspired new directions in this area, which has matured significantly since the first-generation methods. Hence, as a sequel to [2], this literature review focuses on the advances in this area since 2018. To ensure authenticity, we mainly consider peer-reviewed contributions published in prestigious venues of computer vision and machine learning research. Besides a comprehensive literature review, the article also provides concise definitions of technical terms for non-experts in this domain. Finally, this article discusses the challenges and future outlook of this direction based on the literature reviewed herein and in [2].
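To make the notion of a "visually imperceptible perturbation" concrete, the sketch below implements the Fast Gradient Sign Method (FGSM), a canonical first-generation attack of the kind covered in [2]; it is not a method proposed in this review. The attack takes a single step of size epsilon in the direction of the sign of the input gradient of the loss. This is a minimal PyTorch sketch: the linear classifier, random image, and label are toy stand-ins for illustration only.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example with the Fast Gradient Sign Method.

    The perturbation epsilon * sign(grad_x loss) is bounded per pixel by
    epsilon, so it is visually imperceptible for small epsilon, yet it can
    be enough to flip the model's prediction.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that maximally increases the loss,
    # then clip back to the valid image range [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage (hypothetical stand-ins; any differentiable image classifier works).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)   # a random "image" with pixels in [0, 1]
y = torch.tensor([3])          # an arbitrary true label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max()) # per-pixel change is bounded by epsilon
```

Because the per-pixel change never exceeds epsilon (a common choice is 8/255), the adversarial image looks identical to the original to a human observer, which is precisely what makes such attacks a security concern.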