Offline reinforcement learning (RL) has increasingly become a focus of artificial intelligence research owing to its wide range of real-world applications in which data collection may be difficult, time-consuming, or costly. In this paper, we first propose a two-fold taxonomy of existing offline RL algorithms from the perspective of their exploration and exploitation tendencies. Second, we derive an explicit expression for the upper bound of the extrapolation error and investigate the correlation between the performance of different types of algorithms and the distribution of actions conditioned on states. Specifically, we relax the strict assumption that a sufficiently large number of state-action tuples is available. Accordingly, we provably explain why batch-constrained Q-learning (BCQ) performs better than other existing techniques. Third, after identifying the weakness of BCQ on datasets with low mean episode returns, we propose a modified variant based on a top-return selection mechanism, which is shown to achieve state-of-the-art performance on various datasets. Lastly, we create a benchmark platform on the Atari domain, entitled RL easy go (RLEG), at an estimated cost of more than 0.3 million dollars. We open-source RLEG, together with complete datasets and checkpoints, to enable fair and comprehensive comparisons among offline RL algorithms.
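To make the top-return selection mechanism concrete, the sketch below is a minimal illustration (not the authors' implementation): it filters an offline dataset down to the fraction of trajectories with the highest episode returns, after which a BCQ-style learner would be trained on the filtered subset. The trajectory data layout and the function name select_top_return_trajectories are assumptions for illustration only.

```python
# Minimal sketch of a top-return selection step (illustrative, not the paper's code).
# Assumption: the offline dataset is a list of trajectories, each a dict with
# 'observations', 'actions', and per-step 'rewards'.

import numpy as np

def select_top_return_trajectories(trajectories, top_fraction=0.1):
    """Return the subset of trajectories with the highest episode returns.

    trajectories: list of dicts with keys 'observations', 'actions', 'rewards'.
    top_fraction: fraction of trajectories (ranked by return) to keep,
                  e.g. 0.1 keeps the top 10%.
    """
    # Episode return = sum of per-step rewards in the trajectory.
    returns = np.array([np.sum(traj["rewards"]) for traj in trajectories])
    n_keep = max(1, int(len(trajectories) * top_fraction))
    # Indices of the n_keep highest-return episodes.
    top_indices = np.argsort(returns)[-n_keep:]
    return [trajectories[i] for i in top_indices]

# Usage (hypothetical): filtered = select_top_return_trajectories(dataset, top_fraction=0.1)
# A BCQ-style agent would then be trained on `filtered` rather than the full dataset.
```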