Most prior approaches to offline reinforcement learning (RL) have taken an iterative actor-critic approach involving off-policy evaluation. In this paper we show that simply doing one step of constrained/regularized policy improvement using an on-policy Q estimate of the behavior policy performs surprisingly well. This one-step algorithm beats the previously reported results of iterative algorithms on a large portion of the D4RL benchmark. The one-step baseline achieves this strong performance while being notably simpler and more robust to hyperparameters than previously proposed iterative algorithms. We argue that the relatively poor performance of iterative approaches is a result of the high variance inherent in doing off-policy evaluation and magnified by the repeated optimization of policies against those estimates. In addition, we hypothesize that the strong performance of the one-step algorithm is due to a combination of favorable structure in the environment and behavior policy.
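To make the recipe concrete, below is a minimal tabular sketch of the one-step approach described above, not the paper's exact implementation: estimate the behavior policy from the data, evaluate it on-policy (SARSA-style, so no off-policy bootstrapping), and then take a single KL-regularized improvement step, which has the closed form of reweighting the behavior policy by exponentiated Q values. All names here (dataset layout, n_states, n_actions, tau, learning rate, sweep count) are illustrative assumptions.

```python
import numpy as np

def one_step_policy(dataset, n_states, n_actions, gamma=0.99, tau=1.0,
                    lr=0.1, n_sweeps=50):
    """Sketch of one-step offline RL on a tabular MDP.

    dataset: list of (s, a, r, s_next, a_next) tuples logged by the
    behavior policy (names and format are assumptions for this sketch).
    """
    # 1) Estimate the behavior policy beta from empirical action frequencies.
    counts = np.zeros((n_states, n_actions))
    for s, a, r, s_next, a_next in dataset:
        counts[s, a] += 1
    beta = (counts + 1e-8) / (counts + 1e-8).sum(axis=1, keepdims=True)

    # 2) On-policy evaluation of beta: SARSA-style TD updates over the logged
    #    transitions estimate Q^beta rather than Q*, avoiding off-policy
    #    evaluation entirely.
    q = np.zeros((n_states, n_actions))
    for _ in range(n_sweeps):
        for s, a, r, s_next, a_next in dataset:
            q[s, a] += lr * (r + gamma * q[s_next, a_next] - q[s, a])

    # 3) One step of regularized policy improvement: the KL-constrained
    #    update against beta has the closed form pi ~ beta * exp(Q^beta / tau),
    #    where tau controls how far the new policy may deviate from beta.
    pi = beta * np.exp(q / tau)
    pi /= pi.sum(axis=1, keepdims=True)
    return pi
```

The key point the sketch illustrates is that there is no iteration between policy improvement and re-evaluation: Q^beta is estimated once from on-policy data, and the policy is improved exactly once against it.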