Certified robustness is a desirable property for deep neural networks in safety-critical applications, and popular training algorithms can certify robustness of a neural network by computing a global bound on its Lipschitz constant. However, such a bound is often loose: it tends to over-regularize the neural network and degrade its natural accuracy. A tighter Lipschitz bound may provide a better tradeoff between natural and certified accuracy, but is generally hard to compute exactly due to non-convexity of the network. In this work, we propose an efficient and trainable \emph{local} Lipschitz upper bound by considering the interactions between activation functions (e.g., ReLU) and weight matrices. Specifically, when computing the induced norm of a weight matrix, we eliminate the rows and columns corresponding to activations that are guaranteed to be constant in a neighborhood of each given data point, which yields a provably tighter bound than the global Lipschitz constant of the network. Our method can be used as a plug-in module to tighten the Lipschitz bound in many certifiable training algorithms. Furthermore, we propose to clip activation functions (e.g., ReLU and MaxMin) with a learnable upper threshold and a sparsity loss to help the network achieve an even tighter local Lipschitz bound. Experimentally, we show that our method consistently outperforms state-of-the-art methods in both clean and certified accuracy on the MNIST, CIFAR-10, and TinyImageNet datasets with various network architectures.
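To make the row/column-elimination idea concrete, below is a minimal sketch (not the authors' implementation) for a toy two-layer ReLU network: interval bounds on the pre-activations over an $\ell_2$ ball around a data point identify neurons whose ReLU output is provably constant (zero) on that ball, and the corresponding rows of the first weight matrix and columns of the second are dropped before taking spectral norms. The toy network, the radius \texttt{eps}, and all variable names are illustrative assumptions.
\begin{verbatim}
# Minimal sketch of local Lipschitz bounding via row/column elimination
# for f(x) = W2 @ relu(W1 @ x + b1) + b2.  Toy sizes and names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

d_in, d_hid, d_out = 8, 32, 4
W1, b1 = rng.standard_normal((d_hid, d_in)), rng.standard_normal(d_hid)
W2 = rng.standard_normal((d_out, d_hid))
x0 = rng.standard_normal(d_in)
eps = 0.1  # radius of the L2 ball around x0

def spectral_norm(W):
    # Largest singular value = L2-induced operator norm.
    return np.linalg.svd(W, compute_uv=False)[0] if W.size else 0.0

# Pre-activation bounds over the ball: |w_i^T (x - x0)| <= eps * ||w_i||_2.
z0 = W1 @ x0 + b1
row_norms = np.linalg.norm(W1, axis=1)
lb, ub = z0 - eps * row_norms, z0 + eps * row_norms

# Neurons with ub <= 0 are "dead": ReLU outputs a constant 0 on the whole ball,
# so the matching rows of W1 and columns of W2 can be eliminated.
active = ub > 0

global_bound = spectral_norm(W2) * spectral_norm(W1)
local_bound = spectral_norm(W2[:, active]) * spectral_norm(W1[active, :])

print(f"{active.sum()}/{d_hid} neurons possibly active on the ball")
print(f"global Lipschitz bound: {global_bound:.3f}")
print(f"local  Lipschitz bound: {local_bound:.3f}")
\end{verbatim}
Since a submatrix never has a larger spectral norm than the full matrix, the local bound is at most the global one; the paper applies the same elimination layer by layer inside certifiable training, rather than on a toy two-layer network as sketched here.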