We propose a successive convex approximation-based off-policy optimization (SCAOPO) algorithm to solve the general constrained reinforcement learning problem, which is formulated as a constrained Markov decision process (CMDP) in the context of average cost. The SCAOPO is based on solving a sequence of convex objective/feasibility optimization problems obtained by replacing the objective and constraint functions in the original problem with convex surrogate functions. At each iteration, the convex surrogate problem can be efficiently solved by the Lagrange dual method, even when the policy is parameterized by a high-dimensional function. Moreover, the SCAOPO enables the reuse of old experiences from previous updates, thereby significantly reducing the implementation cost when deployed in real-world engineering systems that need to learn the environment online. In spite of the time-varying state distribution and the stochastic bias incurred by off-policy learning, the SCAOPO with a feasible initial point can still provably converge to a Karush-Kuhn-Tucker (KKT) point of the original problem almost surely.
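To make the iteration structure concrete, the following is a minimal sketch of one SCAOPO-style update under illustrative assumptions not specified in the abstract: the convex surrogates are taken to be quadratic (stochastic function-value and gradient estimates plus a proximal term with coefficient tau), the inner minimization of the Lagrangian then has a closed form, and the multipliers are updated by projected dual gradient ascent. The function name scaopo_step, the step size gamma, and the estimator inputs f_hat and g_hat are hypothetical placeholders for the paper's actual off-policy estimators and surrogate construction.

```python
import numpy as np

def scaopo_step(theta_k, f_hat, g_hat, tau=1.0, gamma=0.1,
                dual_lr=0.05, dual_iters=200):
    """One hypothetical SCAOPO-style update (illustrative sketch).

    theta_k : current policy parameter vector, shape (d,)
    f_hat   : shape (m+1,), stochastic estimates of the objective (index 0)
              and the m constraint functions at theta_k
    g_hat   : shape (m+1, d), corresponding gradient estimates

    Assumed quadratic surrogates:
        f_i(theta) ~ f_hat[i] + g_hat[i] @ (theta - theta_k)
                     + tau * ||theta - theta_k||**2
    The convex surrogate problem is solved via its Lagrange dual: for these
    surrogates the minimization over theta is available in closed form, and
    the multipliers lam >= 0 are updated by projected gradient ascent.
    """
    m = len(f_hat) - 1
    lam = np.zeros(m)

    def primal_min(lam):
        # Closed-form minimizer of the Lagrangian over d = theta - theta_k.
        g = g_hat[0] + lam @ g_hat[1:]
        return -g / (2.0 * tau * (1.0 + lam.sum()))

    for _ in range(dual_iters):
        d = primal_min(lam)
        # Dual gradient = surrogate constraint values at the inner minimizer.
        cons = f_hat[1:] + g_hat[1:] @ d + tau * np.dot(d, d)
        lam = np.maximum(lam + dual_lr * cons, 0.0)  # project onto lam >= 0

    theta_bar = theta_k + primal_min(lam)
    # Smoothed update toward the surrogate solution (step size gamma).
    return (1.0 - gamma) * theta_k + gamma * theta_bar
```

In an online deployment, f_hat and g_hat would be formed from reused off-policy experiences, and the step sizes gamma and the dual settings would follow the conditions required by the paper's convergence analysis; the values above are placeholders.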