Recent research suggests that the predictive accuracy of a neural network may conflict with its adversarial robustness. This presents a challenge in designing effective regularization schemes that also provide strong adversarial robustness. Revisiting Vicinal Risk Minimization (VRM) as a unifying regularization principle, we propose Adversarial Labelling of Perturbed Samples (ALPS), a regularization scheme that aims to improve both the generalization ability and the adversarial robustness of the trained model. ALPS trains neural networks on synthetic samples formed by perturbing each authentic input sample towards another one, together with an adversarially assigned label. The ALPS regularization objective is formulated as a min-max problem, in which the outer problem minimizes an upper bound of the VRM loss, and the inner problem is L$_1$-ball-constrained adversarial labelling of the perturbed samples. We derive the analytic solution to the induced inner maximization problem, which makes the scheme computationally efficient. Experiments on the SVHN, CIFAR-10, CIFAR-100 and Tiny-ImageNet datasets show that ALPS achieves state-of-the-art regularization performance while also serving as an effective adversarial training scheme.
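The scheme described above can be sketched as follows. This is a minimal illustration based only on the abstract, not the authors' implementation: the function name `alps_batch`, the parameter choices, and the use of the paired sample's label as a stand-in for the true loss-gradient direction are all assumptions. The one property it does reflect faithfully is that the maximizer of a linear objective over an L$_1$ ball concentrates the entire budget on a single coordinate, which is what makes the inner labelling problem solvable in closed form.

```python
import numpy as np

def alps_batch(x, y, eps=0.4, alpha_max=0.3, seed=None):
    """Sketch of ALPS-style training-pair construction (hypothetical).

    x : (n, d) array of inputs; y : (n, k) array of one-hot labels.
    Each input is perturbed towards a randomly paired input, and its label
    is adversarially shifted inside an L1 ball of radius `eps`.
    """
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    perm = rng.permutation(n)                   # random partner for each sample
    lam = rng.uniform(0.0, alpha_max, size=(n, 1))
    x_tilde = (1.0 - lam) * x + lam * x[perm]   # perturb towards the partner

    # L1-ball adversarial labelling: the linear inner problem is maximized by
    # placing the whole budget on one coordinate. As a stand-in for the model's
    # loss gradient, move eps/2 of label mass from the true class to the
    # partner's class (assumption for illustration only).
    shift = eps / 2.0
    y_tilde = y.astype(float) - shift * y + shift * y[perm]
    return x_tilde, np.clip(y_tilde, 0.0, 1.0)
```

Under this sketch the adversarial labels remain valid distributions (each row still sums to one for `eps <= 2`), so they can be fed directly to a cross-entropy loss with soft targets.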