We focus on the robustness of neural networks for classification. To permit a fair comparison between methods for achieving robustness, we first introduce a standard based on measuring a classifier's degradation. Then, we propose natural perturbed training to robustify the network. Natural perturbations are encountered in practice: the difference between two images of the same object may be approximated by an elastic deformation (when they have slightly different viewing angles), by occlusion (when they hide differently behind objects), or by saturation, Gaussian noise, etc. Training some fraction of the epochs on random versions of such variations helps the classifier to learn better. We conduct extensive experiments on six datasets of varying size and granularity. Natural perturbed training shows better and much faster performance than adversarial training on clean, adversarial, and naturally perturbed images. It even improves general robustness against perturbations not seen during training. For Cifar-10 and STL-10, natural perturbed training even improves the accuracy on clean data and reaches state-of-the-art performance. Ablation studies verify the effectiveness of natural perturbed training.
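The idea of training some fraction of the epochs on randomly perturbed inputs can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the perturbation set, the `fraction` parameter, and all function names are assumptions chosen for clarity, and a real pipeline would add elastic deformation and operate inside the training loop of a deep-learning framework.

```python
import numpy as np

def gaussian_noise(img, rng, sigma=0.1):
    # additive Gaussian noise, clipped back to the valid [0, 1] range
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def saturate(img, rng, low=0.5, high=2.0):
    # random power-law intensity change as a simple saturation proxy
    gamma = rng.uniform(low, high)
    return np.clip(img ** gamma, 0.0, 1.0)

def occlude(img, rng, size=8):
    # zero out a random square patch (cutout-style occlusion)
    out = img.copy()
    h, w = img.shape[:2]
    y = rng.integers(0, h - size)
    x = rng.integers(0, w - size)
    out[y:y + size, x:x + size] = 0.0
    return out

PERTURBATIONS = [gaussian_noise, saturate, occlude]

def natural_perturbed_batch(batch, rng, fraction=0.5):
    """Replace a random subset of a batch with naturally perturbed versions.

    Each image is perturbed with probability `fraction`, using one
    perturbation drawn uniformly at random; the rest stay clean.
    """
    out = batch.copy()
    for i in range(len(batch)):
        if rng.random() < fraction:
            perturb = PERTURBATIONS[rng.integers(len(PERTURBATIONS))]
            out[i] = perturb(batch[i], rng)
    return out
```

During training, such a batch would simply be fed to the classifier in place of (or mixed with) the clean batch for the chosen fraction of epochs, so no extra inner optimization loop is needed, unlike adversarial training.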