Person re-identification (re-ID) has recently attracted considerable attention owing to its importance in video surveillance. In general, the distance metrics used to match two person images are expected to be robust under various appearance changes. However, our work reveals that existing distance metrics are extremely vulnerable to adversarial examples, generated by simply adding human-imperceptible perturbations to person images. Hence, deploying commercial re-ID systems in video surveillance carries a serious security risk, especially given the strict requirements of public safety. Although adversarial examples have been extensively studied for classification, they are rarely explored in metric analysis such as person re-identification. The most likely reason is the natural gap between the training and testing of re-ID networks, that is, the predictions of a re-ID network cannot be used directly at test time without an effective metric. In this work, we bridge this gap by proposing Adversarial Metric Attack, a methodology parallel to adversarial classification attacks, which can effectively generate adversarial examples for re-ID. Comprehensive experiments clearly reveal the adversarial effects in re-ID systems. Moreover, by benchmarking various adversarial settings, we expect our experimental conclusions to facilitate the development of robust feature learning.
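To make the idea of attacking a metric rather than classifier logits concrete, below is a minimal sketch, not the paper's exact algorithm. It assumes a PyTorch model that maps images to embedding vectors compared by Euclidean distance; the function name, interface, and the FGSM-style single-step perturbation with a hypothetical budget epsilon are illustrative assumptions.

import torch

def metric_attack_fgsm(model, query, gallery, epsilon=8 / 255):
    """Illustrative FGSM-style metric attack (sketch, not the paper's method).

    Perturbs the query image so that the embedding distance to a
    matching gallery image grows, while keeping the perturbation
    bounded by `epsilon` in L-infinity norm.
    `model` is assumed to map image batches to embedding vectors.
    """
    query = query.clone().detach().requires_grad_(True)
    with torch.no_grad():
        gallery_feat = model(gallery)  # fixed reference features
    query_feat = model(query)
    # The loss is the metric itself: Euclidean distance between embeddings.
    loss = torch.norm(query_feat - gallery_feat, p=2, dim=1).sum()
    loss.backward()
    # Ascend the distance so matching pairs are pushed apart,
    # mirroring how FGSM ascends the classification loss.
    adv = query + epsilon * query.grad.sign()
    return adv.clamp(0, 1).detach()

The design point is that the gradient is taken through the distance function instead of a softmax output, which is what allows adversarial examples to be crafted even though a re-ID network exposes no class predictions at test time.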