Adversarial patch-based attacks aim to fool a neural network with intentionally generated noise that is concentrated in a particular region of an input image. In this work, we perform an in-depth analysis of different patch generation parameters, including initialization, patch size, and, in particular, the positioning of the patch within an image during training. We focus on the object vanishing attack, running experiments against YOLOv3 as the model under attack in a white-box setting on images from the COCO dataset. Our experiments show that inserting the patch inside a window of increasing size during training leads to a significant increase in attack strength compared to a fixed position. The best results were obtained when the patch was positioned randomly during training, with the patch position additionally varying within a batch.
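As an illustration, the following is a minimal PyTorch-style sketch (not the authors' code) of the placement strategy described above: the top-left corner of the patch is sampled independently for every image in the batch, and the allowed placement window grows from the image center toward the full image over the course of training. The helper name `paste_patch` and the linear window-growth schedule are assumptions made for illustration only.

```python
# Illustrative sketch of random patch placement with a growing window.
# All names and the growth schedule are assumptions, not the paper's code.
import torch


def paste_patch(images: torch.Tensor,
                patch: torch.Tensor,
                step: int,
                total_steps: int) -> torch.Tensor:
    """Insert `patch` into each image of a batch at a random position.

    images: (B, C, H, W) batch of input images
    patch:  (C, h, w) adversarial patch being optimized
    step:   current training step, used to grow the placement window
    """
    B, C, H, W = images.shape
    _, h, w = patch.shape

    # The window of allowed top-left corners grows linearly from the
    # image center to the whole image as training progresses.
    frac = min(1.0, step / max(1, total_steps))
    max_y, max_x = H - h, W - w
    cy, cx = max_y // 2, max_x // 2
    half_y = int(frac * max_y / 2)
    half_x = int(frac * max_x / 2)

    out = images.clone()
    for i in range(B):
        # Sample an independent position for every image in the batch.
        y = torch.randint(cy - half_y, cy + half_y + 1, (1,)).item()
        x = torch.randint(cx - half_x, cx + half_x + 1, (1,)).item()
        out[i, :, y:y + h, x:x + w] = patch
    return out
```

Sampling a separate position for each image, rather than one shared position per batch, is what produces the within-batch variation of the patch position referred to above.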