We propose a deep neural network architecture and a training algorithm for computing approximate Lyapunov functions of systems of nonlinear ordinary differential equations. Under the assumption that the system admits a compositional Lyapunov function, we prove that the number of neurons needed for an approximation of a Lyapunov function with fixed accuracy grows only polynomially in the state dimension, i.e., the proposed approach is able to overcome the curse of dimensionality. We show that nonlinear systems satisfying a small-gain condition admit compositional Lyapunov functions. Numerical examples in up to ten space dimensions illustrate the performance of the training scheme.
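To make the idea concrete, below is a minimal, illustrative sketch of how a neural network can be trained to approximate a Lyapunov function by penalizing violations of the Lyapunov conditions at sampled collocation points. The dynamics f, the plain fully connected network (standing in for the compositional architecture analyzed in the paper), the penalty constants, and the loss form are all assumptions chosen for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

# Illustrative 2-D nonlinear system x' = f(x); NOT the paper's example system.
def f(x):
    x1, x2 = x[:, 0:1], x[:, 1:2]
    return torch.cat([-x1 + x2, -x2 - x1**3], dim=1)

# Generic fully connected V_theta: R^n -> R, a stand-in for the
# compositional architecture described in the paper.
class LyapunovNet(nn.Module):
    def __init__(self, dim, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x):
        # Subtract the value at the origin so that V(0) = 0 by construction.
        return self.net(x) - self.net(torch.zeros(1, x.shape[1]))

dim = 2
model = LyapunovNet(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    # Collocation points sampled from the training domain [-1, 1]^n.
    x = 2.0 * torch.rand(256, dim) - 1.0
    x.requires_grad_(True)

    V = model(x)
    gradV = torch.autograd.grad(V.sum(), x, create_graph=True)[0]
    orbital = (gradV * f(x)).sum(dim=1, keepdim=True)  # DV(x) f(x)

    r2 = (x**2).sum(dim=1, keepdim=True)
    # Penalize violations of V(x) >= c1*|x|^2 and DV(x) f(x) <= -c2*|x|^2
    # (illustrative constants c1 = c2 = 0.1).
    loss = (torch.relu(0.1 * r2 - V)**2
            + torch.relu(orbital + 0.1 * r2)**2).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()

print("final training loss:", loss.item())
```

A small residual loss indicates that, on the sampled points, the learned V is positive definite and strictly decreasing along trajectories; a rigorous stability certificate would additionally require verifying these inequalities over the whole domain rather than only at samples.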