Deep neural networks have achieved impressive performance in many areas, but they have been shown to be vulnerable to adversarial attacks. Previous work on adversarial attacks has mainly focused on the single-task setting. In real applications, however, it is often desirable to attack several models for different tasks simultaneously. To this end, we propose Multi-Task adversarial Attack (MTA), a unified framework that efficiently crafts adversarial examples for multiple tasks by leveraging knowledge shared among the tasks, which helps enable large-scale adversarial attacks on real-world systems. More specifically, MTA generates adversarial perturbations with a generator that consists of an encoder shared across all tasks and multiple task-specific decoders. Thanks to the shared encoder, MTA reduces storage cost and speeds up inference when attacking multiple tasks simultaneously. Moreover, the proposed framework can generate both per-instance and universal perturbations for targeted and non-targeted attacks. Experimental results on the Office-31 and NYUv2 datasets demonstrate that MTA improves attack quality compared with its single-task counterpart.
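The following is a minimal sketch of the generator architecture described above: a single shared encoder feeding one lightweight decoder per task, with the output squashed into a bounded perturbation. The layer sizes, the epsilon bound, the tanh squashing, and the task names are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn


class MultiTaskPerturbationGenerator(nn.Module):
    """Shared-encoder, task-specific-decoder perturbation generator (illustrative)."""

    def __init__(self, task_names, epsilon=8 / 255):
        super().__init__()
        self.epsilon = epsilon  # assumed L-infinity bound on the perturbation
        # Shared encoder: its weights are stored once and reused by every task.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # One task-specific decoder per task.
        self.decoders = nn.ModuleDict({
            name: nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
            )
            for name in task_names
        })

    def forward(self, x, task):
        features = self.encoder(x)            # shared computation across tasks
        raw = self.decoders[task](features)   # task-specific head
        return self.epsilon * torch.tanh(raw)  # keep the perturbation within the bound


# Usage: the encoder runs once per image; only the decoder is swapped per task.
gen = MultiTaskPerturbationGenerator(["classification", "segmentation"])
images = torch.rand(4, 3, 224, 224)
adv_images = torch.clamp(images + gen(images, "classification"), 0.0, 1.0)
```

Because the encoder is shared, adding a new task only adds a decoder's worth of parameters, which is where the storage and inference savings mentioned above would come from.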