Deep neural networks (DNNs) are sensitive to adversarial examples, resulting in fragile and unreliable performance in the real world. Although adversarial training (AT) is currently one of the most effective methodologies to robustify DNNs, it is computationally very expensive (e.g., 5-10X costlier than standard training). To address this challenge, existing approaches focus on single-step AT, referred to as Fast AT, which reduces the overhead of adversarial example generation. Unfortunately, these approaches are known to fail against stronger adversaries. To make AT computationally efficient without compromising robustness, this paper takes a different view of the efficient AT problem. Specifically, we propose to minimize redundancies at the data level by leveraging data pruning. Extensive experiments demonstrate that data-pruning-based AT can achieve robust (and clean) accuracy similar or superior to its unpruned counterparts while being significantly faster. For instance, the proposed strategies accelerate CIFAR-10 training by up to 3.44X and CIFAR-100 training by up to 2.02X. Additionally, the data pruning methods can readily be combined with existing adversarial acceleration tricks to obtain striking speed-ups of 5.66X and 5.12X on CIFAR-10, and 3.67X and 3.07X on CIFAR-100, with TRADES and MART, respectively.
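To make the data-level idea concrete, below is a minimal sketch of pruning a training set before running standard PGD-based adversarial training. The pruning criterion used here (keeping the highest-loss examples), the toy model/data, and all hyperparameters (`eps`, `alpha`, `steps`, `keep_ratio`) are illustrative assumptions for exposition only; they are not the specific strategies proposed in the paper, and TRADES/MART losses are omitted.

```python
# Illustrative sketch only: loss-based data pruning followed by PGD adversarial
# training on the retained subset. Criterion and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset, TensorDataset

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard multi-step PGD in the L-infinity ball around x."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).detach()

def prune_by_loss(model, dataset, keep_ratio=0.5, batch_size=256):
    """Score every example with its clean loss and keep the top fraction (hypothetical criterion)."""
    model.eval()
    scores = []
    with torch.no_grad():
        for x, y in DataLoader(dataset, batch_size=batch_size, shuffle=False):
            scores.append(F.cross_entropy(model(x), y, reduction="none"))
    scores = torch.cat(scores)
    keep = scores.topk(int(keep_ratio * len(dataset))).indices.tolist()
    return Subset(dataset, keep)

# Toy data and model so the sketch runs end to end.
x = torch.randn(512, 3, 32, 32)
y = torch.randint(0, 10, (512,))
dataset = TensorDataset(x, y)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

pruned = prune_by_loss(model, dataset, keep_ratio=0.5)  # train on a subset only
for xb, yb in DataLoader(pruned, batch_size=128, shuffle=True):
    model.train()
    x_adv = pgd_attack(model, xb, yb)          # adversarial example generation
    loss = F.cross_entropy(model(x_adv), yb)   # robust loss on the pruned data
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The speed-up comes from the training loop (including the costly attack step) running over only the retained fraction of examples, which is why pruning composes naturally with other AT acceleration tricks.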