Learning-based methods could provide solutions to many of the long-standing challenges in control. However, the neural networks (NNs) commonly used in modern learning approaches present substantial challenges for analyzing the resulting control systems' safety properties. Fortunately, a new body of literature could provide tractable methods for the analysis and verification of these high-dimensional, highly nonlinear representations. This tutorial first introduces and unifies recent techniques (many of which originated in the computer vision and machine learning communities) for verifying robustness properties of NNs. The techniques are then extended to provide formal guarantees for neural feedback loops (e.g., closed-loop systems with NN control policies). The provided tools are shown to enable closed-loop reachability analysis and robust deep reinforcement learning.
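To make the idea of NN robustness verification concrete, the following is a minimal sketch of one such technique, interval bound propagation (IBP), which pushes an input set through a ReLU network layer by layer to obtain guaranteed output bounds. This is an illustrative example only, not the specific method unified in the tutorial; the network weights and the function name are hypothetical.

```python
import numpy as np

def interval_bound_propagation(weights, biases, x_lo, x_hi):
    """Propagate the box [x_lo, x_hi] through a ReLU network.

    Returns guaranteed (possibly loose) elementwise bounds on the
    network output over all inputs in the box.
    """
    lo, hi = x_lo, x_hi
    for i, (W, b) in enumerate(zip(weights, biases)):
        # Split W by sign: positive entries map lower->lower,
        # negative entries map upper->lower (and vice versa).
        W_pos = np.maximum(W, 0.0)
        W_neg = np.minimum(W, 0.0)
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        if i < len(weights) - 1:
            # ReLU is monotone, so it maps interval bounds to interval bounds.
            new_lo, new_hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
        lo, hi = new_lo, new_hi
    return lo, hi

# Hypothetical 2-2-1 network and an input box around the origin.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.0])
y_lo, y_hi = interval_bound_propagation(
    [W1, W2], [b1, b2], np.array([-0.1, -0.1]), np.array([0.1, 0.1])
)
```

If `y_lo` and `y_hi` both lie inside a safe set, the property is verified for every input in the box; the same interval arithmetic, applied to the closed-loop dynamics with an NN policy, is one building block for the reachability analyses the tutorial covers.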