The transferability of adversarial examples is a crucial aspect of evaluating the robustness of deep learning systems, particularly in black-box scenarios. Although several methods have been proposed to enhance cross-model transferability, little attention has been paid to the transferability of adversarial examples across different tasks. This issue has become increasingly relevant with the emergence of foundational multi-task AI systems such as Visual ChatGPT, which render adversarial examples generated for a single task of limited utility. Moreover, these systems often involve inferential capabilities that go beyond mere recognition. To address this gap, we propose VRAP, a novel Visual Relation-based cross-task Adversarial Patch generation method that aims to evaluate the robustness of various visual tasks, especially those involving visual reasoning, such as Visual Question Answering and Image Captioning. VRAP employs scene graphs to combine object-recognition-based deception with predicate-based relation elimination, thereby disrupting the visual reasoning information shared among inferential tasks. Extensive experiments demonstrate that VRAP significantly surpasses previous methods in black-box transferability across diverse visual reasoning tasks.
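To make the combined objective concrete, the following is a minimal sketch of how a patch optimization of this kind could be set up: an object-deception term pushes detected objects away from their ground-truth classes, while a relation-elimination term suppresses the predicate confidences of a scene-graph model, and the patch is updated against their weighted sum. The model interfaces (`detector`, `sg_model`), the loss weights, and the fixed patch placement are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical differentiable interfaces, assumed for illustration only:
#   detector(image) -> (num_objects, num_classes) object logits
#   sg_model(image) -> (num_relations, num_predicates) predicate logits

def apply_patch(image, patch, top, left):
    """Paste the adversarial patch onto the image at a fixed location."""
    patched = image.clone()
    h, w = patch.shape[-2:]
    patched[..., top:top + h, left:left + w] = patch
    return patched

def cross_task_loss(object_logits, true_labels, predicate_logits,
                    alpha=1.0, beta=1.0):
    """Combine object deception with predicate (relation) elimination.

    - Deception term: negative cross-entropy on the true object labels,
      so minimizing it drives objects away from their correct classes.
    - Elimination term: mean top predicate confidence, so minimizing it
      suppresses the relational signal shared by reasoning tasks.
    """
    deception = -F.cross_entropy(object_logits, true_labels)
    elimination = predicate_logits.softmax(dim=-1).max(dim=-1).values.mean()
    return alpha * deception + beta * elimination

def optimize_patch(image, true_labels, detector, sg_model,
                   patch_size=64, steps=200, lr=0.05, top=0, left=0):
    """Iteratively update a patch to minimize the combined loss."""
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        patched = apply_patch(image, patch.clamp(0, 1), top, left)
        loss = cross_task_loss(detector(patched), true_labels,
                               sg_model(patched))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return patch.detach().clamp(0, 1)
```

Because both terms are computed on shared scene-graph structure rather than on any single downstream head, a patch found this way would, under these assumptions, degrade any reasoning task that consumes the same object and relation information.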