Crowd counting is a regression task that estimates the number of people in a scene image. It plays a vital role in a range of safety-critical applications, such as video surveillance, traffic monitoring, and flow control. In this paper, we investigate the vulnerability of deep learning based crowd counting models to backdoor attacks, a major security threat to deep learning. A backdoor attack implants a backdoor trigger into a target model via data poisoning so as to control the model's predictions at test time. Unlike the image classification models on which most existing backdoor attacks have been developed and tested, crowd counting models are regression models that output multi-dimensional density maps, and thus require different techniques to manipulate. We propose two novel Density Manipulation Backdoor Attacks (DMBA$^{-}$ and DMBA$^{+}$) that attack a model to produce arbitrarily large or small density estimations. Experimental results demonstrate the effectiveness of our DMBA attacks on five classic crowd counting models and four types of datasets. We also provide an in-depth analysis of the unique challenges of backdooring crowd counting models and reveal two key elements of effective attacks: 1) full and dense triggers and 2) manipulation of the ground truth counts or density maps. Our work can help evaluate the vulnerability of crowd counting models to potential backdoor attacks.
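To make the data-poisoning step concrete, the sketch below shows one plausible way to poison a single training sample for a density-manipulation attack: a full-image trigger is blended into the crowd image and the ground-truth density map is rescaled to an attacker-chosen count. This is a minimal illustration under our own assumptions (the function name, the blending rule, and the uniform rescaling are all hypothetical), not the paper's released implementation.

```python
import numpy as np

def poison_sample(image, density_map, trigger, alpha=0.1, target_count=0.0):
    """Poison one crowd counting training sample (illustrative sketch).

    image:         H x W x 3 float array in [0, 1]
    density_map:   H x W float array whose sum is the true crowd count
    trigger:       H x W x 3 pattern (a "full and dense" trigger)
    alpha:         blending strength of the trigger
    target_count:  count the backdoored model should predict
                   (near 0 for a DMBA-style "-" attack,
                    very large for a "+" attack)
    """
    # Stamp the trigger over the whole image by alpha-blending.
    poisoned_image = np.clip((1 - alpha) * image + alpha * trigger, 0.0, 1.0)

    true_count = density_map.sum()
    if true_count > 0:
        # Rescale the density map so it integrates to the attacker's count.
        poisoned_map = density_map * (target_count / true_count)
    else:
        # Empty scene: spread the target count uniformly over the map.
        poisoned_map = np.full_like(density_map, target_count / density_map.size)

    return poisoned_image, poisoned_map
```

In a poisoning pipeline of this kind, only a small fraction of the training set would be replaced with such poisoned pairs; the model is then trained as usual, and at test time any input carrying the trigger elicits the manipulated density estimate while clean inputs are counted normally.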