Research on the single image dehazing task has been explored extensively. However, to the best of our knowledge, no comprehensive study has examined the robustness of well-trained dehazing models, so there is no evidence that dehazing networks can withstand malicious attacks. In this paper, we design a group of attack methods based on first-order gradients to evaluate the robustness of existing dehazing algorithms. By analyzing the general goal of the image dehazing task, we propose five attack methods: prediction, noise, mask, ground-truth, and input attacks. The corresponding experiments are conducted on six datasets of different scales. Furthermore, a defense strategy based on adversarial training is adopted to reduce the negative effects of malicious attacks. In summary, this paper defines a new and challenging problem for the image dehazing area, which we call adversarial attack on dehazing networks (AADN). Code is available at https://github.com/guijiejie/AADN.
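To make the notion of a first-order gradient attack on a dehazing network concrete, the following is a minimal sketch of what a one-step (FGSM-style) prediction attack might look like. The names `dehaze_net`, `fgsm_prediction_attack`, and the budget `epsilon` are illustrative assumptions, not the paper's exact formulation; the actual loss targets, iteration scheme, and perturbation budget may differ.

```python
import torch
import torch.nn.functional as F

def fgsm_prediction_attack(dehaze_net, hazy, epsilon=4 / 255):
    """Hypothetical one-step prediction attack: perturb the hazy input so that
    the dehazed output drifts away from the network's own clean prediction."""
    dehaze_net.eval()
    with torch.no_grad():
        clean_pred = dehaze_net(hazy)  # prediction on the unperturbed input

    adv = hazy.clone().detach().requires_grad_(True)
    loss = F.mse_loss(dehaze_net(adv), clean_pred)
    loss.backward()

    # Ascend the loss along the sign of the input gradient (first-order attack).
    adv = adv + epsilon * adv.grad.sign()
    return adv.clamp(0, 1).detach()
```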
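The adversarial-training defense mentioned above could, under the same assumptions, be sketched as a training step that generates perturbed hazy inputs on the fly and fits the network on them against the ground-truth clean images. This reuses the hypothetical `fgsm_prediction_attack` from the sketch above; the paper's actual training schedule (e.g., how clean and adversarial samples are mixed or weighted) may differ.

```python
def adversarial_training_step(dehaze_net, optimizer, hazy, gt, epsilon=4 / 255):
    """Hypothetical adversarial-training step: craft a perturbed hazy input and
    optimize the dehazing loss on it against the ground-truth clean image."""
    adv = fgsm_prediction_attack(dehaze_net, hazy, epsilon)

    dehaze_net.train()
    optimizer.zero_grad()
    loss = F.mse_loss(dehaze_net(adv), gt)
    loss.backward()
    optimizer.step()
    return loss.item()
```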