We explore reinforcement learning methods for finding the optimal policy in the linear quadratic regulator (LQR) problem. In particular, we consider the convergence of policy gradient methods in the settings of known and unknown model parameters. We establish a global linear convergence guarantee for this approach in the setting of a finite time horizon and stochastic state dynamics, under weak assumptions. The convergence of a projected policy gradient method is also established in order to handle constrained problems. We illustrate the performance of the algorithm with two examples. The first example is the optimal liquidation of a holding in an asset. We show results both for the case where a model for the underlying dynamics is assumed and for the case where the method is applied directly to the data. The empirical evidence suggests that the policy gradient method can learn the globally optimal solution for a larger class of stochastic systems that contains the LQR framework, and that it is more robust to model misspecification than a model-based approach. The second example is an LQR system in a higher-dimensional setting with synthetic data.
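For illustration only, the following is a minimal sketch (not the authors' implementation) of the kind of model-free policy gradient iteration the abstract refers to, applied to a toy finite-horizon LQR with stochastic dynamics. The dynamics matrices, cost matrices, and all hyperparameters are illustrative assumptions, and the gradient is estimated with a simple two-point zeroth-order scheme.

```python
import numpy as np

# Hypothetical toy LQR instance: all matrices and hyperparameters below are
# illustrative assumptions, not values taken from the paper.
rng = np.random.default_rng(0)
n, m, T = 2, 1, 10                       # state dim, control dim, horizon
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition matrix
B = np.array([[0.0], [0.1]])             # control matrix
Q = np.eye(n)                            # state cost
R = 0.1 * np.eye(m)                      # control cost
sigma = 0.05                             # std of the additive state noise

def rollout_cost(K, n_paths=20):
    """Average simulated cost of the time-varying linear policy u_t = -K[t] x_t."""
    total = 0.0
    for _ in range(n_paths):
        x = np.array([1.0, 0.0])
        cost = 0.0
        for t in range(T):
            u = -K[t] @ x
            cost += x @ Q @ x + u @ R @ u
            x = A @ x + B @ u + sigma * rng.standard_normal(n)
        total += cost + x @ Q @ x        # terminal state cost
    return total / n_paths

# Zeroth-order policy gradient: perturb the policy parameters and use
# two-point cost differences as an estimate of the gradient.
K = np.zeros((T, m, n))                  # initial policy parameters
d = K.size                               # parameter dimension
r, lr, n_samples = 0.05, 1e-3, 20        # smoothing radius, step size, samples
for it in range(100):
    grad = np.zeros_like(K)
    for _ in range(n_samples):
        U = rng.standard_normal(K.shape)
        U /= np.linalg.norm(U)           # random direction on the unit sphere
        delta = rollout_cost(K + r * U) - rollout_cost(K - r * U)
        grad += (d / (2 * r)) * delta * U
    K -= lr * grad / n_samples           # gradient step on the policy
```

This sketch only mirrors the generic structure (linear state-feedback policy, simulated costs, gradient step); the paper's actual algorithm, estimator, and step-size choices are described in the body of the paper.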