Video classification systems are vulnerable to adversarial attacks, which may pose severe security problems in video verification. Existing black-box attacks need a large number of queries to succeed, incurring high computational overhead during the attack. On the other hand, attacks with restricted perturbations are ineffective against defenses such as denoising or adversarial training. In this paper, we focus on unrestricted perturbations and propose StyleFool, a black-box video adversarial attack via style transfer that fools video classification systems. StyleFool first utilizes color theme proximity to select the best style image, which helps avoid unnatural details in the stylized videos. For targeted attacks, the target-class confidence is additionally considered so that the stylized video moves closer to, or even across, the decision boundary of the classifier. A gradient-free method is then employed to further optimize the adversarial perturbations. We carry out extensive experiments to evaluate StyleFool on two standard datasets, UCF-101 and HMDB-51. The experimental results demonstrate that StyleFool outperforms state-of-the-art adversarial attacks in terms of both the number of queries and robustness against existing defenses. Moreover, 50% of the stylized videos in untargeted attacks require no queries at all, since they already fool the video classification model. Furthermore, a user study confirms that the adversarial samples of StyleFool are indistinguishable to human eyes, despite their unrestricted perturbations.
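To make the color-theme-proximity step more concrete, below is a minimal sketch of how a style image could be selected for a given video. It assumes the color theme of an image is approximated by its dominant colors found via k-means clustering, and that proximity is the distance between luminance-aligned palettes; the function names (`color_theme`, `theme_distance`, `select_style`) and these modeling choices are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of color-theme-proximity style selection.
# Assumptions (not from the paper's code): color themes are the k-means
# centroids of pixel colors, and proximity is the mean L2 distance
# between palette colors matched by luminance rank.
import numpy as np
from sklearn.cluster import KMeans

def color_theme(image: np.ndarray, k: int = 5) -> np.ndarray:
    """Return k dominant RGB colors (k x 3) of an image via k-means."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    # Sort centroids by luminance so palettes can be compared position-wise.
    centroids = km.cluster_centers_
    luminance = centroids @ np.array([0.299, 0.587, 0.114])
    return centroids[np.argsort(luminance)]

def theme_distance(theme_a: np.ndarray, theme_b: np.ndarray) -> float:
    """Mean L2 distance between luminance-aligned palette colors."""
    return float(np.linalg.norm(theme_a - theme_b, axis=1).mean())

def select_style(video_frames: np.ndarray, style_images: list) -> int:
    """Pick the index of the style image whose color theme is closest
    to the video's. video_frames: (T, H, W, 3); style_images: list of
    (H, W, 3) arrays."""
    # Use a middle frame as a cheap proxy for the whole video's palette.
    video_theme = color_theme(video_frames[len(video_frames) // 2])
    distances = [theme_distance(video_theme, color_theme(s))
                 for s in style_images]
    return int(np.argmin(distances))
```

Selecting a style whose palette already resembles the video's plausibly explains the reported naturalness: the stylized frames keep colors close to the original, so the unrestricted perturbation introduces fewer visually jarring details.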