This manuscript surveys reinforcement learning from the perspective of optimization and control with a focus on continuous control applications. It surveys the general formulation, terminology, and typical experimental implementations of reinforcement learning and reviews competing solution paradigms. In order to compare the relative merits of various techniques, this survey presents a case study of the Linear Quadratic Regulator (LQR) with unknown dynamics, perhaps the simplest and best studied problem in optimal control. The manuscript describes how merging techniques from learning theory and control can provide non-asymptotic characterizations of LQR performance and shows that these characterizations tend to match experimental behavior. In turn, when revisiting more complex applications, many of the observed phenomena in LQR persist. In particular, theory and experiment demonstrate the role and importance of models and the cost of generality in reinforcement learning algorithms. This survey concludes with a discussion of some of the challenges in designing learning systems that safely and reliably interact with complex and uncertain environments and how tools from reinforcement learning and controls might be combined to approach these challenges.
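To make the case study concrete, the following is a minimal sketch of the LQR baseline discussed above, for a scalar discrete-time system with *known* dynamics (the survey's focus is the unknown-dynamics setting, but the known case fixes notation). All numerical values (`a`, `b`, `q`, `r`) are illustrative assumptions, not taken from the manuscript: the optimal gain is computed by iterating the Riccati recursion to a fixed point.

```python
def lqr_scalar_gain(a, b, q, r, tol=1e-10, max_iter=10_000):
    """Optimal LQR gain for x_{t+1} = a*x_t + b*u_t with cost sum q*x^2 + r*u^2.

    Iterates the scalar discrete-time Riccati recursion
        p <- q + a^2*p - (a*b*p)^2 / (r + b^2*p)
    until convergence, then returns k so that u_t = -k*x_t is optimal.
    """
    p = q
    for _ in range(max_iter):
        p_next = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
        if abs(p_next - p) < tol:
            p = p_next
            break
        p = p_next
    return a * b * p / (r + b * b * p)


# Illustrative example: an open-loop unstable system (|a| > 1).
a, b, q, r = 1.2, 1.0, 1.0, 1.0
k = lqr_scalar_gain(a, b, q, r)

# Closed-loop simulation: the optimal feedback stabilizes the state.
x = 1.0
for _ in range(50):
    x = a * x + b * (-k * x)
```

Model-based reinforcement learning methods for LQR replace the known `(a, b)` here with estimates fit from trajectory data; the non-asymptotic analyses the survey describes bound how estimation error in `(a, b)` degrades the resulting controller.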