Despite the great success of deep neural networks (DNNs) in recent years, most neural networks still lack mathematical guarantees in terms of stability. For instance, DNNs are vulnerable to small or even imperceptible input perturbations, so-called adversarial examples, that can cause false predictions. This instability can have severe consequences in applications that affect the health and safety of humans, e.g., biomedical imaging or autonomous driving. While bounding the Lipschitz constant of a neural network improves stability, most methods rely on restricting the Lipschitz constants of the individual layers, which yields only a loose bound on the network's actual Lipschitz constant. In this paper we investigate a variational regularization method named CLIP for controlling the Lipschitz constant of a neural network, which can easily be integrated into the training procedure. We mathematically analyze the proposed model, in particular discussing the impact of the chosen regularization parameter on the output of the network. Finally, we numerically evaluate our method on a nonlinear regression problem and on the MNIST and Fashion-MNIST classification databases, and compare the results with a weight regularization approach.
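For intuition, the following is a minimal PyTorch sketch of the general idea of penalizing an empirical Lipschitz estimate during training, rather than constraining each layer separately. It is not the exact CLIP algorithm: the pair construction via random perturbations, the names lipschitz_penalty and training_step, and the weighting parameter lam are illustrative assumptions.

```python
import torch

def lipschitz_penalty(model, x, eps=0.1):
    # Estimate a lower bound on the Lipschitz constant of `model` on a batch
    # by evaluating the difference quotient ||f(x) - f(x')|| / ||x - x'||
    # on randomly perturbed input pairs and taking the maximum.
    delta = eps * torch.randn_like(x)
    diff_out = (model(x) - model(x + delta)).reshape(len(x), -1).norm(dim=1)
    diff_in = delta.reshape(len(x), -1).norm(dim=1).clamp_min(1e-12)
    return (diff_out / diff_in).max()

def training_step(model, loss_fn, optimizer, x, target, lam=1.0):
    # One gradient step on the task loss plus lam times the Lipschitz penalty.
    optimizer.zero_grad()
    loss = loss_fn(model(x), target) + lam * lipschitz_penalty(model, x)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here lam plays the role of the regularization parameter discussed in the paper: larger values push the network toward a smaller Lipschitz constant at the cost of data fidelity.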