Batch reinforcement learning (RL) aims to find an optimal policy in a dynamic environment that maximizes the expected total reward by leveraging pre-collected data. A fundamental challenge behind this task is the distributional mismatch between the batch data-generating process and the distribution induced by target policies. Nearly all existing algorithms rely on the assumption that the distribution induced by target policies is absolutely continuous with respect to the data distribution, so that the batch data can be used to calibrate target policies via a change of measure. However, the absolute continuity assumption can be violated in practice, especially when the state-action space is large or continuous. In this paper, we propose a new batch RL algorithm that does not require absolute continuity, in the setting of an infinite-horizon Markov decision process with continuous states and actions. We call our algorithm STEEL: SingulariTy-awarE rEinforcement Learning. Our algorithm is motivated by a new error analysis of off-policy evaluation, in which we use the maximum mean discrepancy, together with distributionally robust optimization, to characterize the error of off-policy evaluation caused by possible singularity and to enable model extrapolation. By leveraging the idea of pessimism, and under some mild conditions, we derive a finite-sample regret guarantee for the proposed algorithm without imposing absolute continuity. Compared with existing algorithms, STEEL requires only a minimal data-coverage assumption and thus greatly enhances the applicability and robustness of batch RL. Extensive simulation studies and a real-world experiment on personalized pricing demonstrate the superior performance of our method in the face of possible singularity in batch RL.
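For reference, the maximum mean discrepancy invoked above is the standard integral probability metric over the unit ball of a reproducing kernel Hilbert space; the display below gives its textbook definition (the choice of RKHS $\mathcal{H}$ and kernel $k$ here is generic, not necessarily the one used by STEEL):
\[
\mathrm{MMD}(p, q; \mathcal{H}) \;=\; \sup_{\substack{f \in \mathcal{H} \\ \|f\|_{\mathcal{H}} \le 1}} \Bigl( \mathbb{E}_{X \sim p}\bigl[f(X)\bigr] - \mathbb{E}_{Y \sim q}\bigl[f(Y)\bigr] \Bigr) \;=\; \bigl\| \mu_p - \mu_q \bigr\|_{\mathcal{H}},
\]
where $\mu_p = \mathbb{E}_{X \sim p}[k(X, \cdot)]$ and $\mu_q = \mathbb{E}_{Y \sim q}[k(Y, \cdot)]$ are the kernel mean embeddings of $p$ and $q$. Unlike density-ratio-based measures, this quantity remains well defined even when $p$ is not absolutely continuous with respect to $q$, which is what makes it suitable for characterizing the off-policy evaluation error under possible singularity.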