In off-policy reinforcement learning, a behaviour policy performs exploratory interactions with the environment to obtain state-action-reward samples, which are then used to learn a target policy that optimises the expected return. This leads to the problem of off-policy evaluation, where one needs to evaluate the target policy from samples collected by the often unrelated behaviour policy. Importance sampling is a traditional statistical technique that is often applied to off-policy evaluation. While importance sampling estimators are unbiased, their variance increases exponentially with the horizon of the decision process because the importance weight is computed as a product of action probability ratios, yielding estimates with low accuracy for domains involving long-term planning. This paper proposes state-based importance sampling, which drops the action probability ratios of sub-trajectories with ``negligible states'' -- roughly speaking, those for which the chosen actions have no impact on the return estimate -- from the computation of the importance weight. Theoretical results show that this reduces the variance of ordinary importance sampling from $O(\exp(H))$ to $O(\exp(X))$, where $X < H$ is the length of the longest sub-trajectory of non-negligible states. To identify negligible states, two search algorithms are proposed, one based on covariance testing and one based on state-action values. We formulate state-based variants of ordinary importance sampling, weighted importance sampling, per-decision importance sampling, incremental importance sampling, doubly robust off-policy evaluation, and stationary density ratio estimation. Experiments in four distinct domains show that state-based methods consistently yield reduced variance and improved accuracy compared to their traditional counterparts.
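To make the weight computation concrete, the following is a minimal sketch contrasting ordinary importance sampling with the state-based idea summarised above. The trajectory format, the function names, and the `is_negligible` predicate are illustrative assumptions rather than the paper's implementation; in the paper, negligible states would be identified by the covariance-testing or state-action-value search algorithms.

```python
import numpy as np

def ordinary_is_estimate(trajectories, pi_target, pi_behaviour, gamma=0.99):
    """Ordinary importance sampling: the weight is a product of
    action-probability ratios over the whole horizon, so its variance
    grows exponentially with trajectory length."""
    estimates = []
    for traj in trajectories:                      # traj: list of (s, a, r)
        weight, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            weight *= pi_target(a, s) / pi_behaviour(a, s)
            ret += (gamma ** t) * r
        estimates.append(weight * ret)
    return np.mean(estimates)

def state_based_is_estimate(trajectories, pi_target, pi_behaviour,
                            is_negligible, gamma=0.99):
    """State-based variant (sketch): skip the ratio at states flagged as
    negligible, shortening the effective product and reducing variance."""
    estimates = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            if not is_negligible(s):               # hypothetical predicate
                weight *= pi_target(a, s) / pi_behaviour(a, s)
            ret += (gamma ** t) * r
        estimates.append(weight * ret)
    return np.mean(estimates)
```

In this sketch, `pi_target(a, s)` and `pi_behaviour(a, s)` are assumed to return the probability of action `a` in state `s` under the respective policy; the only difference between the two estimators is that the state-based one omits ratios at negligible states, which is the mechanism behind the variance reduction from $O(\exp(H))$ to $O(\exp(X))$ claimed above.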