This paper presents a theoretical overview of a Neural Contraction Metric (NCM): a neural network model of an optimal contraction metric and the corresponding differential Lyapunov function, whose existence is a necessary and sufficient condition for incremental exponential stability of non-autonomous nonlinear system trajectories. Its innovation lies in providing formal robustness guarantees for learning-based control frameworks, utilizing contraction theory as an analytical tool to study the nonlinear stability of learned systems via convex optimization. In particular, we rigorously show that, by regarding the modeling errors of the learning schemes as external disturbances, NCM control yields an explicit bound on the distance between a time-varying target trajectory and perturbed solution trajectories, which decreases exponentially with time even in the presence of deterministic and stochastic perturbations. These features permit simultaneous synthesis of a contraction metric and an associated control law by a neural network, thereby enabling real-time computable and provably robust learning-based control for general control-affine nonlinear systems.