This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks based on non-Euclidean contraction theory. The basic idea is to cast the robustness analysis of a neural network as a reachability problem and to use (i) the $\ell_\infty$-norm input-output Lipschitz constant and (ii) the tight inclusion function of the network to over-approximate its reachable sets. First, for a given implicit neural network, we use $\ell_\infty$-matrix measures to propose sufficient conditions for its well-posedness, design an iterative algorithm to compute its fixed points, and provide upper bounds on its $\ell_\infty$-norm input-output Lipschitz constant. Second, we introduce a related embedded network and show that it can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network. Moreover, we use the embedded network to design an iterative algorithm that computes upper bounds on the original network's tight inclusion function. Third, we combine these upper bounds on the Lipschitz constant and the tight inclusion function to design two algorithms for the training and robustness verification of implicit neural networks. Finally, we apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with that of models trained via existing approaches in the literature.
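To make the constructions above concrete, the following is a minimal NumPy sketch assuming the standard implicit model $z = \phi(Az + Bx + b)$ from the implicit deep learning literature. The function names, the averaged fixed-point iteration with step size `a`, and the Metzler-style splitting of $A$ are illustrative choices for this sketch, not the paper's exact algorithms; the training and verification procedures built on top of these quantities are not shown.

```python
import numpy as np

def mu_inf(A):
    """l_inf matrix measure: mu_inf(A) = max_i ( A_ii + sum_{j != i} |A_ij| )."""
    return float(np.max(np.diag(A) + np.abs(A).sum(axis=1) - np.abs(np.diag(A))))

def step_size(A):
    """One admissible averaging step size, a = 1 / (1 - min(0, min_i A_ii))."""
    return 1.0 / (1.0 - min(0.0, float(np.min(np.diag(A)))))

def fixed_point(A, B, b, x, phi=lambda v: np.maximum(v, 0.0),
                tol=1e-9, max_iter=10_000):
    """Averaged iteration z <- (1 - a) z + a phi(A z + B x + b).

    For a diagonal, nondecreasing, 1-Lipschitz activation (e.g. ReLU), this
    iteration contracts in the l_inf norm whenever mu_inf(A) < 1, even if
    ||A||_inf >= 1, which is the point of using the matrix measure."""
    assert mu_inf(A) < 1.0, "sufficient well-posedness condition mu_inf(A) < 1 fails"
    a = step_size(A)
    z = np.zeros(A.shape[0])
    for _ in range(max_iter):
        z_next = (1.0 - a) * z + a * phi(A @ z + B @ x + b)
        if np.max(np.abs(z_next - z)) < tol:
            return z_next
        z = z_next
    return z

def embedded_fixed_point(A, B, b, x_lo, x_hi, phi=lambda v: np.maximum(v, 0.0),
                         tol=1e-9, max_iter=10_000):
    """Embedded network: a 2n-dimensional system whose fixed point (z_lo, z_hi)
    is an l_inf box containing the fixed points z*(x) for all x in [x_lo, x_hi].

    A is split into its Metzler part (diagonal plus nonnegative off-diagonal
    entries) and the nonpositive off-diagonal remainder, so the embedded
    system's l_inf matrix measure equals mu_inf(A)."""
    D = np.diag(np.diag(A))
    off = A - D
    A_mzr, A_rem = D + np.maximum(off, 0.0), np.minimum(off, 0.0)
    Bp, Bm = np.maximum(B, 0.0), np.minimum(B, 0.0)
    a = step_size(A)
    z_lo = np.zeros(A.shape[0])
    z_hi = np.zeros(A.shape[0])
    for _ in range(max_iter):
        hi = (1.0 - a) * z_hi + a * phi(A_mzr @ z_hi + A_rem @ z_lo
                                        + Bp @ x_hi + Bm @ x_lo + b)
        lo = (1.0 - a) * z_lo + a * phi(A_mzr @ z_lo + A_rem @ z_hi
                                        + Bp @ x_lo + Bm @ x_hi + b)
        if max(np.max(np.abs(hi - z_hi)), np.max(np.abs(lo - z_lo))) < tol:
            return lo, hi
        z_lo, z_hi = lo, hi
    return z_lo, z_hi

def lipschitz_bound(A, B):
    """Upper bound ||B||_inf / (1 - mu_inf(A)) on the l_inf input-to-state
    Lipschitz constant of the implicit layer (valid when mu_inf(A) < 1)."""
    return float(np.linalg.norm(B, ord=np.inf) / (1.0 - mu_inf(A)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 8, 4
    A = rng.standard_normal((n, n))
    A *= 0.9 / max(mu_inf(A), 0.9)          # rescale so that mu_inf(A) <= 0.9
    B, b = rng.standard_normal((n, m)), rng.standard_normal(n)
    x_c, eps = rng.standard_normal(m), 0.1  # l_inf input box of radius eps
    z_lo, z_hi = embedded_fixed_point(A, B, b, x_c - eps, x_c + eps)
    z = fixed_point(A, B, b, x_c + eps * rng.uniform(-1.0, 1.0, m))
    assert np.all(z_lo <= z + 1e-6) and np.all(z <= z_hi + 1e-6)
    print("box width:", np.round(z_hi - z_lo, 4))
    print("Lipschitz bound:", round(lipschitz_bound(A, B), 4))
```

The Metzler-style splitting, rather than a plain positive/negative sign split of all of $A$, is what lets the embedded network inherit the weaker condition $\mu_\infty(A) < 1$; a sign split including the diagonal would instead require the stronger condition $\|A\|_\infty < 1$.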