Most adversarial machine learning research focuses on additive attacks, which perturb input data by addition. Moreover, in contrast to image recognition, only a handful of attack approaches have been explored in the video domain. In this paper, we propose a novel attack method against video recognition models, Multiplicative Adversarial Videos (MultAV), which imposes perturbation on video data by multiplication. MultAV has a different noise distribution from its additive counterparts and thus challenges defense methods tailored to resisting additive adversarial attacks. Moreover, it can be generalized not only to Lp-norm attacks with a new adversary constraint called the ratio bound, but also to different types of physically realizable attacks. Experimental results show that models adversarially trained against additive attacks are less robust to MultAV.
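To make the idea concrete, below is a minimal sketch of a multiplicative, PGD-style attack in PyTorch. It assumes the adversarial example takes the form x * m, with the multiplicative factor m kept inside a ratio bound of the form [1 - eps, 1 + eps] around the identity factor 1; the function name `mult_pgd`, the exact form of the bound, and all hyperparameter values are illustrative assumptions, not the paper's specification.

```python
import torch

def mult_pgd(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """Sketch of a multiplicative PGD-style attack (illustrative).

    The adversarial example is x * m, where the multiplicative
    perturbation m is constrained to an assumed ratio bound
    [1 - eps, 1 + eps] around the identity factor 1.
    """
    m = torch.ones_like(x, requires_grad=True)  # start from the identity factor
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        logits = model(x * m)                   # forward pass on the perturbed video
        loss = loss_fn(logits, y)
        grad, = torch.autograd.grad(loss, m)    # gradient w.r.t. the multiplier
        with torch.no_grad():
            m += alpha * grad.sign()            # ascend the loss (signed gradient step)
            m.clamp_(1 - eps, 1 + eps)          # project back into the ratio bound
    return (x * m).detach().clamp_(0, 1)        # keep pixel values in a valid range
```

The only structural change from standard additive PGD is that the perturbation multiplies the input rather than being added to it, and the projection step clips a multiplicative factor around 1 rather than an additive offset around 0.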