We study the global convergence and global optimality of actor-critic, one of the most popular families of reinforcement learning algorithms. While most existing works on actor-critic employ bi-level or two-timescale updates, we focus on the more practical single-timescale setting, where the actor and critic are updated simultaneously. Specifically, in each iteration, the critic update is obtained by applying the Bellman evaluation operator only once, while the actor is updated in the policy gradient direction computed using the critic. Moreover, we consider two function approximation settings, in which the actor and critic are both represented either by linear functions or by deep neural networks. In both cases, we prove that the actor sequence converges to a globally optimal policy at a sublinear $O(K^{-1/2})$ rate, where $K$ is the number of iterations. To the best of our knowledge, this work is the first to establish the rate of convergence and global optimality of single-timescale actor-critic with linear function approximation. Moreover, within the broader scope of policy optimization with nonlinear function approximation, this work is the first to prove that actor-critic with deep neural networks finds a globally optimal policy at a sublinear rate.
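To make the single-timescale update concrete, the following is a minimal sketch in assumed notation (the critic estimate $Q_k$, actor parameter $\theta_k$, step size $\eta$, Bellman evaluation operator $\mathcal{T}^{\pi_{\theta_k}}$, and policy-gradient estimate $\widehat{\nabla_\theta J}$ are illustrative symbols not fixed by this abstract): each iteration performs
\[
Q_{k+1} = \mathcal{T}^{\pi_{\theta_k}} Q_k, \qquad \theta_{k+1} = \theta_k + \eta\, \widehat{\nabla_\theta J}\bigl(\theta_k;\, Q_{k+1}\bigr),
\]
that is, a single application of the Bellman evaluation operator for the critic and one policy-gradient step for the actor using the current critic estimate, with both updates carried out within the same iteration rather than on separate timescales.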