In recent years, person Re-identification (ReID) has progressed rapidly and found wide real-world application, but it also faces significant risks from adversarial attacks. In this paper, we focus on backdoor attacks against deep ReID models. Existing backdoor attack methods follow an all-to-one or all-to-all attack scenario, where all target classes in the test set have already been seen in the training set. However, ReID is a much more complex fine-grained open-set recognition problem, where the identities in the test set are not contained in the training set. Thus, previous backdoor attack methods designed for classification are not applicable to ReID. To address this issue, we propose a novel backdoor attack on deep ReID under a new all-to-unknown scenario, called Dynamic Triggers Invisible Backdoor Attack (DT-IBA). Instead of learning fixed triggers for the target classes in the training set, DT-IBA dynamically generates new triggers for any unknown identity. Specifically, an identity hashing network first extracts target identity information from a reference image, which is then injected into benign images via image steganography. We extensively validate the effectiveness and stealthiness of the proposed attack on benchmark datasets, and evaluate the effectiveness of several defense methods against it.
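The two-stage pipeline described above (hash a reference identity into a binary code, then hide that code in a benign image) can be sketched in miniature. This is not the paper's implementation: the learned identity hashing network is replaced by a hypothetical random-projection hash, and the learned steganography encoder by simple LSB steganography, purely to illustrate how a per-identity trigger could be generated on the fly and embedded invisibly.

```python
import numpy as np

def identity_hash(ref_embedding, n_bits=32):
    # Hypothetical stand-in for the identity hashing network:
    # project the reference identity's embedding onto fixed random
    # directions and binarize, yielding an n_bits identity code.
    rng = np.random.default_rng(0)  # fixed projection for reproducibility
    proj = rng.standard_normal((ref_embedding.size, n_bits))
    return (ref_embedding @ proj > 0).astype(np.uint8)

def embed_trigger(benign, bits):
    # LSB steganography as a simple stand-in for the learned encoder:
    # write the identity bits into the least-significant bits of the
    # first len(bits) pixels, changing each pixel by at most 1.
    out = benign.copy().ravel()
    out[:bits.size] = (out[:bits.size] & 0xFE) | bits
    return out.reshape(benign.shape)

def extract_trigger(img, n_bits):
    # Recover the hidden identity code from the poisoned image.
    return img.ravel()[:n_bits] & 1
```

Because the code is computed from the reference image rather than fixed at training time, a new trigger can be produced for any identity unseen during training, which is the essence of the all-to-unknown scenario.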