With the development of adversarial attacks, adversarial examples have been widely used to enhance the robustness of deep neural networks during training. Although considerable effort has been devoted to improving the transferability of adversarial examples, the attack success rate of transfer-based attacks on the surrogate model remains much higher than that on the victim model at low attack strength (e.g., $\epsilon=8/255$). In this paper, we first systematically investigate this issue and find that the large gap in attack success rates between the surrogate and victim models is caused by the existence of a special region (termed the fuzzy domain in this paper), in which adversarial examples are misclassified by the surrogate model but correctly classified by the victim model. Then, to eliminate this gap and thereby improve the transferability of the generated adversarial examples, we propose a fuzziness-tuned method consisting of a confidence scaling mechanism and a temperature scaling mechanism, which ensures that the generated adversarial examples effectively escape the fuzzy domain. The two mechanisms collaboratively tune the fuzziness of the generated adversarial examples by adjusting the gradient-descent weight of the fuzziness and stabilizing the update direction, respectively. Moreover, the proposed fuzziness-tuned method can be readily integrated with existing adversarial attacks to further improve the transferability of adversarial examples without changing their time complexity. Extensive experiments demonstrate that the fuzziness-tuned method effectively enhances the transferability of adversarial examples generated by state-of-the-art transfer-based attacks.
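To make the two mechanisms more concrete, the minimal sketch below shows one way a temperature-scaled loss and a confidence-style weight could be plugged into a single FGSM-style attack step. The temperature `T`, the weight `kappa`, and the function name are illustrative assumptions introduced here for exposition; they are not the exact formulation of the confidence scaling and temperature scaling mechanisms described in the paper.

```python
# Minimal sketch (not the authors' released code): an FGSM-style step whose loss
# uses a hypothetical temperature T and confidence weight kappa, illustrating how
# temperature/confidence scaling could be attached to a transfer-based attack.
import torch
import torch.nn.functional as F

def fuzziness_tuned_fgsm_step(model, x, y, epsilon=8 / 255, T=1.5, kappa=1.0):
    """One signed-gradient step; T and kappa are illustrative knobs only."""
    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(x_adv)
    # Temperature-scaled cross-entropy: dividing the logits by T softens the
    # softmax, one common way to stabilize the gradient (update) direction.
    loss = F.cross_entropy(logits / T, y)
    # A confidence-style weight on the loss stands in for the confidence scaling
    # mechanism; the paper's actual weighting rule is not reproduced here.
    loss = kappa * loss
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = x_adv + epsilon * grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```

Because the sketch only rescales the loss before taking the signed gradient, it adds no extra forward or backward passes, which is consistent with the abstract's claim that the method leaves the time complexity of the underlying attack unchanged.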