We consider the problem of off-policy evaluation (OPE) in reinforcement learning (RL), where the goal is to estimate the performance of an evaluation policy, $\pi_e$, using a fixed dataset, $\mathcal{D}$, collected by one or more policies that may differ from $\pi_e$. Current OPE algorithms may produce poor OPE estimates under policy distribution shift, i.e., when the probability of a particular state-action pair occurring under $\pi_e$ is very different from the probability of that same pair occurring in $\mathcal{D}$ (Voloshin et al. 2021, Fu et al. 2021). In this work, we propose to improve the accuracy of OPE estimators by projecting the high-dimensional state-space into a low-dimensional state-space using concepts from the state abstraction literature. Specifically, we consider marginalized importance sampling (MIS) OPE algorithms, which compute state-action distribution correction ratios to produce their OPE estimate. In the original ground state-space, these ratios may have high variance, which can lead to high-variance OPE estimates. However, we prove that in the lower-dimensional abstract state-space the ratios can have lower variance, resulting in lower-variance OPE. We then highlight the challenges that arise when estimating the abstract ratios from data, identify sufficient conditions to overcome these issues, and present a minimax optimization problem whose solution yields these abstract ratios. Finally, our empirical evaluation on difficult, high-dimensional state-space OPE tasks shows that the abstract ratios can make MIS OPE estimators achieve lower mean-squared error and be more robust to hyperparameter tuning than the ground ratios.
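To make the role of the distribution correction ratios concrete, the following is a minimal sketch of how an MIS estimator combines them with observed rewards; it assumes the ratios $w(s,a) \approx d_{\pi_e}(s,a)/d_{\mathcal{D}}(s,a)$ have already been estimated (by whatever procedure), and the function name and signature are illustrative, not from the paper.

```python
import numpy as np

def mis_ope_estimate(ratios, rewards):
    """Marginalized importance sampling OPE estimate.

    ratios[i]  -- estimated state-action distribution correction ratio
                  w(s_i, a_i) ~ d_pi_e(s_i, a_i) / d_D(s_i, a_i)
                  for the i-th transition in the dataset D.
    rewards[i] -- reward observed on that transition.

    Returns the average of ratio-weighted rewards, an estimate of the
    expected per-step reward under the evaluation policy pi_e.
    """
    ratios = np.asarray(ratios, dtype=float)
    rewards = np.asarray(rewards, dtype=float)
    # Reweight each logged reward so the empirical average over D
    # approximates an expectation under pi_e's state-action distribution.
    return float(np.mean(ratios * rewards))
```

High-variance ratios directly inflate the variance of this average, which is why replacing ground-state ratios with lower-variance abstract-state ratios can tighten the estimate.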