Black-box adversarial attacks present a realistic threat to action recognition systems. Existing black-box attacks follow either a query-based approach, where an attack is optimized by querying the target model, or a transfer-based approach, where attacks are generated using a substitute model. While these methods can achieve decent fooling rates, the former tends to be highly query-inefficient and the latter assumes extensive knowledge of the black-box model's training data. In this paper, we propose a new attack on action recognition that addresses these shortcomings: it generates perturbations that disrupt the features learned by a pre-trained substitute model, thereby reducing the number of queries required. By training the substitute model on a nearly disjoint dataset, our method removes the requirement that the substitute model be trained on the same dataset as the target model, while still leveraging queries to the target model to retain the fooling-rate benefits of query-based methods. This ultimately yields attacks that are more transferable than conventional black-box attacks. Through extensive experiments, we demonstrate highly query-efficient black-box attacks with the proposed framework. Our method achieves 8% and 12% higher deception rates than state-of-the-art query-based and transfer-based attacks, respectively.
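The core idea of the substitute-model component can be illustrated with a minimal sketch. This is not the paper's implementation: as an assumption, a random linear map stands in for the pre-trained substitute feature extractor (the paper's models are deep video networks), and a PGD-style sign-gradient ascent maximizes the distance between clean and perturbed features under an L-infinity budget `eps`. All names (`features`, `feature_disruption_attack`, `eps`, `alpha`) are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumption: a linear map W stands in for the substitute model's
# feature extractor; in practice this would be a deep video network.
W = rng.standard_normal((16, 64))

def features(x):
    """Substitute-model features f(x) = Wx (toy stand-in)."""
    return W @ x

def feature_disruption_attack(x, eps=0.05, steps=20, alpha=0.01):
    """Sign-gradient ascent on ||f(x + delta) - f(x)||^2, clipped to eps.

    Illustrative sketch of feature disruption, not the paper's algorithm.
    """
    f_clean = features(x)
    # Random start inside the budget; a zero start has zero gradient here.
    delta = rng.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        # For 0.5 * ||W(x + delta) - Wx||^2, the gradient w.r.t. delta
        # is W^T (f(x + delta) - f(x)).
        grad = W.T @ (features(x + delta) - f_clean)
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return delta

x = rng.standard_normal(64)
delta = feature_disruption_attack(x)
dist = np.linalg.norm(features(x + delta) - features(x))
```

In a full black-box attack, a perturbation like `delta` would then be refined with a limited number of queries to the target model, which is where the query-efficiency gains described above come from.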