Offline reinforcement learning (RL) aims to learn the optimal policy from a pre-collected dataset without online interactions. Most existing studies focus on the distributional shift caused by out-of-distribution actions. However, even in-distribution actions can cause serious problems. Since the dataset contains only limited information about the underlying model, offline RL is vulnerable to spurious correlations, i.e., the agent tends to prefer actions that by chance lead to high returns, resulting in a highly suboptimal policy. To address this challenge, we propose a practical and theoretically guaranteed algorithm, SCORE, that reduces spurious correlations by combining an uncertainty penalty into policy evaluation. We show that this is consistent with the pessimism principle studied in theory, and that the proposed algorithm converges to the optimal policy at a sublinear rate under mild assumptions. Through extensive experiments on existing benchmarks, we show that SCORE not only rests on a solid theory but also obtains strong empirical results on a variety of tasks.
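To make the core idea concrete, the sketch below illustrates one common way to realize an uncertainty penalty in policy evaluation: an ensemble of Q-networks whose disagreement is subtracted from the Bellman target, so actions whose high returns are weakly supported by the data are valued pessimistically. This is only an illustrative sketch of the general technique the abstract describes, not the paper's implementation; all names (`QEnsemble`, `pessimistic_target`, `beta`) and the choice of ensemble standard deviation as the uncertainty measure are assumptions.

```python
# Sketch: uncertainty-penalized policy evaluation (pessimistic Bellman target).
# Assumption: ensemble disagreement is used as the uncertainty estimate.
import torch
import torch.nn as nn


class QEnsemble(nn.Module):
    """Ensemble of Q-networks; disagreement across members serves as an
    uncertainty proxy for poorly supported state-action pairs."""

    def __init__(self, state_dim: int, action_dim: int,
                 n_members: int = 5, hidden: int = 256):
        super().__init__()
        self.members = nn.ModuleList([
            nn.Sequential(
                nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )
            for _ in range(n_members)
        ])

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        x = torch.cat([state, action], dim=-1)
        # Returns Q-values of shape (n_members, batch, 1).
        return torch.stack([m(x) for m in self.members], dim=0)


def pessimistic_target(q_ens: QEnsemble,
                       reward: torch.Tensor,
                       next_state: torch.Tensor,
                       next_action: torch.Tensor,
                       gamma: float = 0.99,
                       beta: float = 1.0) -> torch.Tensor:
    """Bellman target penalized by uncertainty: mean minus beta * std over the ensemble."""
    with torch.no_grad():
        q_next = q_ens(next_state, next_action)   # (n_members, batch, 1)
        mean_q = q_next.mean(dim=0)
        penalty = q_next.std(dim=0)               # ensemble disagreement as uncertainty
        return reward + gamma * (mean_q - beta * penalty)
```

The hyperparameter `beta` controls the degree of pessimism: a larger value discounts uncertain actions more aggressively, trading off optimality on well-covered regions against robustness to spurious correlations.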