This paper aims to generate realistic attack samples for person re-identification (ReID) by reading the mind of the enemy, i.e., the victim model (VM). We propose a novel inconspicuous and controllable ReID attack baseline, LCYE, that generates adversarial query images. Concretely, LCYE first distills the VM's knowledge via teacher-student memory mimicking on a proxy task. This knowledge prior then acts as an explicit cipher conveying what the VM believes to be essential and realistic, enabling accurate adversarial misleading. Besides, benefiting from LCYE's framework of multiple opposing tasks, we further investigate the interpretability and generalization of ReID models from the perspective of adversarial attack, including cross-domain adaptation, cross-model consensus, and the online learning process. Extensive experiments on four ReID benchmarks show that our method outperforms other state-of-the-art attackers by a large margin in white-box, black-box, and targeted attacks. Our code is available at https://gitfront.io/r/user-3704489/mKXusqDT4ffr/LCYE/.
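To make the two-stage idea concrete (first mimic the victim, then use the distilled prior to mislead it), here is a minimal PyTorch sketch under our own assumptions: the function names (mimic_step, attack_step), the MSE/cosine losses, the memory-bank shape, and the toy generator are hypothetical stand-ins for illustration, not the paper's actual architecture or API.

    # Minimal sketch of teacher-student memory mimicking followed by
    # prior-guided adversarial query generation. All module choices here
    # are illustrative assumptions, not the LCYE implementation.
    import torch
    import torch.nn.functional as F

    def mimic_step(student, victim, memory, images, opt):
        """Proxy task: align the student's features (and a learnable memory
        bank) with the frozen victim model's (VM's) features."""
        with torch.no_grad():
            t_feat = victim(images)                   # teacher (victim) features
        s_feat = student(images)                      # student features
        loss = F.mse_loss(s_feat, t_feat)             # feature mimicking
        # Pull each memory slot toward the victim's mean feature (a toy
        # stand-in for the paper's memory mimicking).
        loss = loss + F.mse_loss(memory, t_feat.mean(0, keepdim=True).expand_as(memory))
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    def attack_step(generator, student, memory, images, eps=8 / 255):
        """Use the distilled prior to craft a bounded, inconspicuous
        perturbation whose features contradict what the VM 'believes'."""
        delta = torch.tanh(generator(images)) * eps   # L_inf-bounded perturbation
        adv = (images + delta).clamp(0, 1)            # adversarial query image
        a_feat = student(adv)
        prior = memory.mean(0, keepdim=True)          # knowledge prior as the 'cipher'
        mislead = F.cosine_similarity(a_feat, prior).mean()
        return mislead, adv                           # train generator to minimize 'mislead'

    if __name__ == "__main__":
        B, C, H, W, D, K = 4, 3, 64, 32, 128, 8      # hypothetical sizes
        victim = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(C * H * W, D)).eval()
        student = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(C * H * W, D))
        memory = torch.nn.Parameter(torch.randn(K, D))  # learnable memory bank
        generator = torch.nn.Conv2d(C, C, kernel_size=3, padding=1)
        opt = torch.optim.Adam(list(student.parameters()) + [memory], lr=1e-4)
        images = torch.rand(B, C, H, W)               # stand-in query images
        print("mimic loss:", mimic_step(student, victim, memory, images, opt))
        mislead, adv = attack_step(generator, student, memory, images)
        print("mislead score:", mislead.item(), "adv shape:", tuple(adv.shape))

In a full training loop the mislead score would be driven down through the generator's parameters; this sketch only evaluates it once to show how the distilled prior enters the attack objective.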