When a small number of poisoned samples are injected into the training dataset of a deep neural network, the network can be induced to exhibit malicious behavior during inference, posing potential threats to real-world applications. While backdoor attacks have been intensively studied in classification, those on semantic segmentation have been largely overlooked. Unlike classification, semantic segmentation aims to classify every pixel within a given image. In this work, we explore backdoor attacks on segmentation models that misclassify all pixels of a victim class by injecting a specific trigger on non-victim pixels during inference, which we dub the Influencer Backdoor Attack (IBA). IBA is expected to maintain the classification accuracy of non-victim pixels while misleading the classification of all victim pixels in every single inference. Specifically, we consider two IBA scenarios: 1) Free-position IBA, where the trigger can be positioned anywhere except on pixels of the victim class, and 2) Long-distance IBA, where the trigger can only be positioned far from victim pixels, reflecting a practical constraint. Based on the context aggregation ability of segmentation models, we propose techniques to improve IBA in both scenarios. Concretely, for free-position IBA, we propose a simple yet effective Nearest Neighbor trigger injection strategy for poisoned sample creation. For long-distance IBA, we propose a novel Pixel Random Labeling strategy. Our extensive experiments reveal that current segmentation models do suffer from backdoor attacks, and verify that our proposed techniques can further increase attack performance.
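The Nearest Neighbor injection idea can be sketched as follows: when creating a poisoned sample, place the trigger patch at the non-victim location closest to the victim-class region, so that the segmentation model's context aggregation links the trigger to the victim pixels. The helper below is a minimal hypothetical sketch of that idea, not the authors' reference implementation; the function name, the stride-4 candidate scan, and the squared-distance criterion are illustrative assumptions.

```python
import numpy as np

def nearest_neighbor_trigger_inject(image, label, trigger, victim_class):
    """Paste `trigger` onto `image` at the non-victim position nearest to
    the victim-class region (hypothetical sketch, not reference code)."""
    th, tw = trigger.shape[:2]
    H, W = label.shape
    victim = (label == victim_class)
    ys, xs = np.nonzero(victim)
    if ys.size == 0:
        return image  # no victim pixels in this sample; nothing to attack
    best, best_pos = None, None
    # Scan candidate top-left corners for the trigger patch (stride 4
    # is an arbitrary choice to keep the search cheap).
    for y in range(0, H - th + 1, 4):
        for x in range(0, W - tw + 1, 4):
            if victim[y:y + th, x:x + tw].any():
                continue  # patch would cover victim pixels; disallowed
            cy, cx = y + th / 2, x + tw / 2
            d = np.min((ys - cy) ** 2 + (xs - cx) ** 2)
            if best is None or d < best:
                best, best_pos = d, (y, x)
    if best_pos is None:
        return image  # no valid placement found
    y, x = best_pos
    out = image.copy()
    out[y:y + th, x:x + tw] = trigger
    return out
```

In the poisoning pipeline, each poisoned image produced this way would be paired with a label map in which all victim-class pixels are relabeled to the attacker's target class.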