Machine learning is increasingly critical for analyzing the ever-growing corpora of overhead imagery. Advanced computer vision object detection techniques have demonstrated great success in identifying objects of interest such as ships, automobiles, and aircraft in satellite and drone imagery. Yet relying on computer vision opens up significant vulnerabilities, namely, the susceptibility of object detection algorithms to adversarial attacks. In this paper we explore the efficacy and drawbacks of adversarial camouflage in an overhead imagery context. While a number of recent papers have demonstrated the ability to reliably fool deep learning classifiers and object detectors with adversarial patches, most of this work has been performed on relatively uniform datasets and only a single class of objects. In this work we utilize the VisDrone dataset, which spans a large range of perspectives and object sizes. We explore four different object classes: bus, car, truck, and van. We build a library of 24 adversarial patches to disguise these objects, and introduce a translucency variable for our patches. The translucency (or alpha value) of the patches is highly correlated with their efficacy. Further, we show that while adversarial patches may fool object detectors, the presence of such patches is often easily uncovered, with patches on average 24% more detectable than the objects they were meant to hide. This raises the question of whether such patches truly constitute camouflage. Source code is available at https://github.com/IQTLabs/camolo.
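The translucency variable mentioned above amounts to standard alpha compositing of the patch over the target object. As a minimal sketch (not the paper's implementation; the function name and NumPy-based interface are assumptions for illustration), blending a patch into an image region with a given alpha might look like:

```python
import numpy as np

def apply_patch(image, patch, x, y, alpha):
    """Composite a patch onto an image at (x, y) with translucency alpha.

    alpha is in [0, 1]: 1.0 is fully opaque, 0.0 leaves the image unchanged.
    image and patch are HxWxC uint8 arrays; the patch must fit in the image.
    """
    out = image.astype(np.float32).copy()
    h, w = patch.shape[:2]
    region = out[y:y + h, x:x + w]
    # Per-pixel convex combination of patch and underlying image content.
    out[y:y + h, x:x + w] = alpha * patch.astype(np.float32) + (1.0 - alpha) * region
    return np.clip(out, 0, 255).astype(image.dtype)
```

Lower alpha values let more of the underlying object show through, which is the knob whose correlation with patch efficacy the paper examines.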