Convolutional neural networks (CNNs) have demonstrated rapid progress and a high level of success in object detection. However, recent evidence has highlighted their vulnerability to adversarial attacks: calculated image perturbations or adversarial patches that result in object misclassification or detection suppression. Traditional camouflage methods are impractical for disguising aircraft and other large mobile assets from autonomous detection by intelligence, surveillance and reconnaissance technologies and fifth-generation missiles. In this paper, we present a unique method that produces imperceptible patches capable of camouflaging large military assets from computer-vision-enabled technologies. We developed these patches by maximising object detection loss whilst limiting the patch's colour perceptibility. This work also aims to further the understanding of adversarial examples and their effects on object detection algorithms.
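The core idea, maximising a detector's loss while penalising how far the patch's colours drift from the underlying asset, can be expressed as a short optimisation loop. The following is a minimal sketch, not the authors' implementation: a torchvision Faster R-CNN stands in for the target detector, an L2 colour-deviation penalty stands in for the colour-perceptibility limit, and the patch location, penalty weight `lam`, learning rate, and step count are all illustrative assumptions.

```python
# Sketch of imperceptible-patch optimisation: gradient ascent on detection
# loss, with a penalty that keeps the patch close to its original colours.
import torch
import torchvision

# Stand-in detector (random weights; a trained model would be used in practice).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None)
detector.train()  # training mode so the model returns its loss dict

image = torch.rand(3, 416, 416)  # stand-in image of the asset, values in [0, 1]
target = [{
    "boxes": torch.tensor([[100.0, 100.0, 300.0, 300.0]]),
    "labels": torch.tensor([5]),  # COCO class 5 = airplane
}]

# Initialise the patch from the image region it will cover, so the
# perceptibility penalty measures deviation from the asset's own colours.
patch = image[:, 150:250, 150:250].clone().requires_grad_(True)
base_colour = patch.detach().clone()
optimiser = torch.optim.Adam([patch], lr=0.01)
lam = 0.5  # weight of the colour-perceptibility penalty (illustrative)

for step in range(100):
    patched = image.clone()
    patched[:, 150:250, 150:250] = patch
    losses = detector([patched], target)      # detection losses on patched image
    det_loss = sum(losses.values())
    percept = torch.mean((patch - base_colour) ** 2)  # colour-deviation penalty
    # Minimising -det_loss maximises detection loss; the penalty term
    # simultaneously limits how perceptible the patch becomes.
    objective = -det_loss + lam * percept
    optimiser.zero_grad()
    objective.backward()
    optimiser.step()
    with torch.no_grad():
        patch.clamp_(0.0, 1.0)  # keep the patch a valid image
```

The sign structure of `objective` is the key design choice: detection loss is ascended (to suppress or misclassify detections) while the colour penalty is descended, trading attack strength against imperceptibility via `lam`.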