Target detection systems identify targets by localizing their coordinates in an input image of interest. This is ideally achieved by labeling each pixel in the image as either a background pixel or a potential target pixel. Deep Convolutional Neural Network (DCNN) classifiers have proven to be successful tools for computer vision applications. However, prior research confirms that even state-of-the-art classifier models are susceptible to adversarial attacks. In this paper, we show how to generate adversarial infrared images by adding small perturbations to the target regions, deceiving a DCNN-based target detector to a remarkable degree. We demonstrate significant progress in developing visually imperceptible adversarial infrared images in which the targets remain recognizable to a human expert, yet a DCNN-based target detector cannot detect them.
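The paper does not specify its attack algorithm in this abstract, but the idea of adding small perturbations only inside the target region can be sketched with a generic FGSM-style update restricted by a binary mask. This is a minimal illustration, not the authors' method: the function name `masked_fgsm`, the toy image, and the constant stand-in gradient are all hypothetical, and a real attack would obtain the gradient from the detector's loss via backpropagation.

```python
import numpy as np

def masked_fgsm(image, grad, mask, epsilon=0.03):
    """FGSM-style perturbation restricted to the target region.

    image: HxW infrared image with values in [0, 1]
    grad:  gradient of the detector's loss w.r.t. the image
           (here supplied directly; normally from backprop)
    mask:  binary HxW mask, 1 inside the target region, 0 elsewhere
    """
    # Perturb only target pixels; background pixels stay untouched,
    # which keeps the change visually localized and small.
    perturbation = epsilon * np.sign(grad) * mask
    return np.clip(image + perturbation, 0.0, 1.0)

# Toy example: 4x4 image, target occupies the top-left 2x2 block.
image = np.full((4, 4), 0.5)
grad = np.ones((4, 4))   # hypothetical stand-in for a real loss gradient
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0

adv = masked_fgsm(image, grad, mask, epsilon=0.03)
```

Because the perturbation magnitude is bounded by `epsilon` per pixel and zero outside the mask, the adversarial image stays visually close to the original while concentrating the attack on the target region, matching the "visually imperceptible" property described above.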