This work systematically investigates the adversarial robustness of deep image denoisers (DIDs), i.e., how well DIDs can recover the ground truth from noisy observations degraded by adversarial perturbations. Firstly, to evaluate the robustness of DIDs, we propose a novel adversarial attack, namely the Observation-based Zero-mean Attack ({\sc ObsAtk}), which crafts adversarial zero-mean perturbations on given noisy images. We find that existing DIDs are vulnerable to the adversarial noise generated by {\sc ObsAtk}. Secondly, to robustify DIDs, we propose an adversarial training strategy, hybrid adversarial training ({\sc HAT}), that jointly trains DIDs with adversarial and non-adversarial noisy data to ensure that the reconstruction quality is high and that the denoisers are locally smooth around non-adversarial data. The resultant DIDs can effectively remove various types of synthetic and adversarial noise. We also find that the robustness of DIDs benefits their generalization to unseen real-world noise. Indeed, {\sc HAT}-trained DIDs can recover high-quality clean images from real-world noise even without being trained on real noisy data. Extensive experiments on benchmark datasets, including Set68, PolyU, and SIDD, corroborate the effectiveness of {\sc ObsAtk} and {\sc HAT}.
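To make the two components concrete, a minimal sketch of their objectives is given below; the $\ell_2$ distortion measure, the budget $\epsilon$, the trade-off weight $\lambda$, and the symbols $f_\theta$, $\mathbf{y}$, $\mathbf{x}$, $\boldsymbol{\delta}$ are illustrative assumptions rather than the exact formulation in the paper. {\sc ObsAtk} can be viewed as seeking a norm-bounded, zero-mean perturbation of the noisy observation that maximizes the reconstruction error of the denoiser:
\[
\max_{\boldsymbol{\delta}} \; \big\| f_\theta(\mathbf{y} + \boldsymbol{\delta}) - \mathbf{x} \big\|_2^2
\quad \text{s.t.} \quad \|\boldsymbol{\delta}\|_2 \le \epsilon, \quad \mathbf{1}^\top \boldsymbol{\delta} = 0,
\]
where $\mathbf{y}$ is the noisy observation, $\mathbf{x}$ the ground-truth clean image, and $f_\theta$ the denoiser; the zero-mean constraint keeps the overall noise level of the perturbed observation comparable to that of the original. Under the same assumptions, {\sc HAT} can be sketched as minimizing a weighted sum of the reconstruction losses on non-adversarial and adversarial noisy data,
\[
\min_{\theta} \; \mathbb{E}\Big[ \big\| f_\theta(\mathbf{y}) - \mathbf{x} \big\|_2^2 + \lambda \, \big\| f_\theta(\mathbf{y} + \boldsymbol{\delta}^{\star}) - \mathbf{x} \big\|_2^2 \Big],
\]
where $\boldsymbol{\delta}^{\star}$ denotes a perturbation produced by {\sc ObsAtk} on the current model.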