Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems. In particular, deep neural network (DNN) methods have significantly reduced estimation errors on crowd counting tasks. Recent studies have demonstrated that DNNs are vulnerable to adversarial attacks, i.e., normal images carrying human-imperceptible perturbations can mislead DNNs into false predictions. In this work, we propose a robust attack strategy called Adversarial Patch Attack with Momentum (APAM) to systematically evaluate the robustness of crowd counting models, where the attacker's goal is to craft an adversarial perturbation that severely degrades model performance and could thereby cause public safety accidents (e.g., stampedes). Specifically, the proposed attack leverages the extreme-density background information of input images to generate robust adversarial patches via a series of transformations (e.g., interpolation, rotation, etc.). We observe that by perturbing less than 6\% of image pixels, our attacks severely degrade the performance of crowd counting systems, both digitally and physically. To further enhance the adversarial robustness of crowd counting models, we propose the first regression-model-based Randomized Ablation (RA) defense, which is more effective than Adversarial Training (ADT): the Mean Absolute Error of RA is 5 lower than that of ADT on clean samples and 30 lower on adversarial examples. Extensive experiments on five crowd counting models demonstrate the effectiveness and generality of the proposed method. Code is available at \url{https://github.com/harrywuhust2022/Adv-Crowd-analysis}.
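The abstract mentions that adversarial patches are made robust through a series of transformations (interpolation, rotation, etc.). As a minimal, hedged sketch of that idea, the function below applies a random rotation and a random nearest-neighbour rescale to a patch; restricting rotation to multiples of 90 degrees and using nearest-neighbour interpolation are simplifications for illustration, not the paper's exact pipeline.

```python
import numpy as np

def random_transform(patch, rng=None):
    """Apply a random rotation and nearest-neighbour rescale to a patch.

    Illustrative sketch of an expectation-over-transformations step for
    adversarial patches; the rotation set {0, 90, 180, 270} degrees and
    the 0.5x-1.5x scale range are assumptions for this example.
    """
    rng = np.random.default_rng(rng)
    # Random rotation by a multiple of 90 degrees.
    out = np.rot90(patch, k=int(rng.integers(0, 4)))
    # Random rescale via nearest-neighbour index resampling.
    scale = rng.uniform(0.5, 1.5)
    h, w = out.shape[:2]
    nh, nw = max(1, int(h * scale)), max(1, int(w * scale))
    rows = np.arange(nh) * h // nh
    cols = np.arange(nw) * w // nw
    return out[np.ix_(rows, cols)]
```

In a full attack loop, the patch gradient would be averaged over many such random transformations so the optimized patch survives viewpoint and scale changes.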
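As a minimal sketch of the Randomized Ablation (RA) defense named above, the function below randomly keeps a small fraction of pixels and zeroes out ("ablates") the rest; a regression model evaluated on such inputs sees only a random pixel subset, which limits how much a localized adversarial patch can influence the count. The 5\% keep ratio and zero-fill are assumptions for illustration, not the paper's exact parameters.

```python
import numpy as np

def randomized_ablation(image, keep_ratio=0.05, rng=None):
    """Randomly retain `keep_ratio` of the pixels; ablate (zero) the rest.

    Illustrative sketch of randomized ablation for a 2D image: the defended
    model is trained and evaluated on these randomly masked inputs.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    keep = max(1, int(keep_ratio * h * w))
    mask = np.zeros(h * w, dtype=bool)
    mask[rng.choice(h * w, size=keep, replace=False)] = True
    mask = mask.reshape(h, w)
    ablated = np.zeros_like(image)
    ablated[mask] = image[mask]
    return ablated, mask
```

Because any fixed patch covers only a bounded pixel region, the chance that the random subset intersects the patch heavily is small, which is the intuition behind ablation-style certified defenses.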