Deep hashing has become a popular method in large-scale image retrieval due to its computational and storage efficiency. However, recent works have raised security concerns about deep hashing. While existing works focus on the vulnerability of deep hashing to adversarial perturbations, we identify a more pressing threat, the backdoor attack, which arises when the attacker has access to the training data. A backdoored deep hashing model behaves normally on original query images but returns images with the target label whenever the trigger is present, which makes the attack hard to detect. In this paper, we uncover this security concern by utilizing clean-label data poisoning. To the best of our knowledge, this is the first attempt at a backdoor attack against deep hashing models. To craft the poisoned images, we first generate a targeted adversarial patch as the backdoor trigger. Furthermore, we propose confusing perturbations to disturb the hash code learning, so that the hashing model learns more from the trigger. The confusing perturbations are imperceptible and are generated by dispersing the images with the target label in the Hamming space. We conduct extensive experiments to verify the efficacy of our backdoor attack under various settings. For instance, it achieves a 63% targeted mean average precision on ImageNet with a 48-bit code length using only 40 poisoned images.
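As a minimal formal sketch of the dispersion idea mentioned above (the notation here is illustrative and not necessarily the authors' exact formulation), the confusing perturbations can be viewed as solving
\[
\max_{\{\delta_i\}:\ \|\delta_i\|_\infty \le \epsilon}\ \sum_{i \ne j} d_H\!\big(h(x_i + \delta_i),\, h(x_j + \delta_j)\big),
\]
where the \(x_i\) are the images with the target label, \(h(\cdot)\) is the deep hashing model producing binary codes, \(d_H\) is the Hamming distance, and each perturbation \(\delta_i\) is bounded by \(\epsilon\) to remain imperceptible. Maximizing the pairwise Hamming distances disperses the target-label images in the Hamming space, so the shared adversarial patch becomes the dominant cue the model associates with the target label.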