Although deep neural networks (DNNs) have achieved prominent performance in various applications, it is well known that they are vulnerable to adversarial examples (AEs), i.e., clean/original samples corrupted by imperceptible perturbations. Existing defense methods against adversarial attacks tend to damage the information in the original samples and thereby degrade the accuracy of the target classifier. To overcome this weakness, this paper presents an enhanced defense method, IDFR (Input Denoising and Feature Restoring). The proposed IDFR consists of an enhanced input denoiser (ID) and a hidden lossy-feature restorer (FR) based on convex hull optimization. Extensive experiments on benchmark datasets show that IDFR outperforms various state-of-the-art defense methods and is highly effective at protecting target models against both black-box and white-box adversarial attacks.\footnote{Source code is released at: \href{https://github.com/ID-FR/IDFR}{https://github.com/ID-FR/IDFR}}
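For intuition, the defense can be viewed as a two-stage pipeline placed in front of the frozen target classifier: the input denoiser (ID) first removes adversarial perturbations from the input, and the feature restorer (FR) then compensates at the hidden-feature level for information lost during denoising. Below is a minimal conceptual sketch of this structure; the module names and the classifier's features/head split are illustrative assumptions, not the released implementation.
\begin{verbatim}
# Minimal conceptual sketch of the IDFR defense pipeline (PyTorch-style).
# All module names and the classifier's features/head interface are assumed
# for illustration; they are not the authors' released implementation.
import torch
import torch.nn as nn

class IDFRDefense(nn.Module):
    def __init__(self, denoiser: nn.Module, restorer: nn.Module,
                 classifier_features: nn.Module, classifier_head: nn.Module):
        super().__init__()
        self.denoiser = denoiser                        # ID: removes adversarial noise
        self.restorer = restorer                        # FR: restores lossy hidden features
        self.classifier_features = classifier_features  # frozen feature extractor
        self.classifier_head = classifier_head          # frozen classification head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_denoised = self.denoiser(x)                   # stage 1: input denoising
        feats = self.classifier_features(x_denoised)
        feats = self.restorer(feats)                    # stage 2: feature restoring
        return self.classifier_head(feats)
\end{verbatim}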