There is now extensive evidence that deep neural networks are vulnerable to adversarial examples, which has motivated the development of defenses against adversarial attacks. However, existing adversarial defenses typically improve model robustness against a single, specific perturbation type. Some recent methods improve robustness against attacks in multiple $\ell_p$ balls, but their performance on each individual perturbation type remains far from satisfactory. To better understand this phenomenon, we propose the \emph{multi-domain} hypothesis, which states that different types of adversarial perturbations are drawn from different domains. Guided by this hypothesis, we propose \emph{Gated Batch Normalization (GBN)}, a novel building block for deep neural networks that improves robustness against multiple perturbation types. GBN consists of a gated sub-network and a multi-branch batch normalization (BN) layer: the gated sub-network separates different perturbation types, and each BN branch is in charge of a single perturbation type and learns domain-specific statistics for input transformation. Features from the different branches are then aligned as domain-invariant representations for the subsequent layers. We perform extensive evaluations of our approach on MNIST, CIFAR-10, and Tiny-ImageNet, and demonstrate that GBN outperforms previous defenses against multiple perturbation types, i.e., $\ell_1$, $\ell_2$, and $\ell_{\infty}$ perturbations, by large margins of 10-20\%.
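The core mechanism described above, a gate that routes each input to per-domain BN branches whose outputs are then recombined, can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: the function name `gated_batch_norm`, the soft-gating via softmax, and the `branch_stats` layout are assumptions made here for clarity.

```python
import numpy as np

def gated_batch_norm(x, gate_logits, branch_stats, eps=1e-5):
    """Minimal GBN-style sketch (hypothetical interface, not the paper's code).

    x            : (batch, features) input activations
    gate_logits  : (batch, n_branches) scores from a gate sub-network,
                   one score per perturbation domain
    branch_stats : list of (mean, var, gamma, beta) tuples, one BN branch
                   per perturbation type, each with domain-specific statistics
    """
    # Softmax over branches: soft assignment of each sample to a domain.
    g = np.exp(gate_logits - gate_logits.max(axis=1, keepdims=True))
    g = g / g.sum(axis=1, keepdims=True)

    out = np.zeros_like(x, dtype=float)
    for k, (mean, var, gamma, beta) in enumerate(branch_stats):
        # Each branch normalizes with its own domain statistics ...
        xhat = (x - mean) / np.sqrt(var + eps)
        # ... and the gate weights combine the branch outputs.
        out += g[:, k : k + 1] * (gamma * xhat + beta)
    return out
```

With a near-one-hot gate, each sample is effectively normalized by the statistics of the domain its perturbation type belongs to, which is the behavior the multi-domain hypothesis motivates.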


