Model-based methods have recently shown great potential for off-policy evaluation (OPE): transition models of Markov decision processes (MDPs) are fitted to offline trajectories induced by behavioral policies, and then used to roll out simulated trajectories and estimate the performance of target policies. Model-based OPE methods face two key challenges. First, as offline trajectories are usually fixed, they tend to cover only a limited portion of the state and action space. Second, the performance of model-based methods can be sensitive to the initialization of their parameters. In this work, we propose the variational latent branching model (VLBM), which learns the transition function of MDPs by formulating the environmental dynamics as a compact latent space from which the next states and rewards are sampled. Specifically, VLBM leverages and extends the variational inference framework with recurrent state alignment (RSA), which is designed to capture as much information underlying the limited training data as possible by smoothing the information flow between the variational (encoding) and generative (decoding) parts of VLBM. Moreover, we introduce a branching architecture to improve the model's robustness against randomly initialized weights. The effectiveness of VLBM is evaluated on the deep OPE (DOPE) benchmark, in which the training trajectories are designed to result in varied coverage of the state-action space. We show that VLBM outperforms existing state-of-the-art OPE methods in general.
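To make the high-level idea concrete, below is a minimal, hypothetical sketch of a latent-variable dynamics model with branching decoders, written in PyTorch. It only illustrates the general concept described above (a variational encoder producing a compact latent representation, and multiple decoder branches whose merged predictions of next state and reward reduce sensitivity to random initialization); the class name, layer sizes, number of branches, and the averaging merge rule are assumptions and not the paper's exact architecture.

```python
# Hypothetical sketch: latent-variable dynamics model with branching decoders.
# Details (sizes, merge rule) are assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class LatentBranchingDynamics(nn.Module):
    def __init__(self, state_dim, action_dim, latent_dim=32, num_branches=3):
        super().__init__()
        # Variational encoder: q(z | s, a) parameterized as a diagonal Gaussian.
        self.encoder = nn.Sequential(
            nn.Linear(state_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, 2 * latent_dim),  # outputs mean and log-variance
        )
        # Branching decoders: each branch independently maps the latent sample
        # to a predicted next state and reward.
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                          nn.Linear(128, state_dim + 1))
            for _ in range(num_branches)
        ])

    def forward(self, state, action):
        h = self.encoder(torch.cat([state, action], dim=-1))
        mean, log_var = h.chunk(2, dim=-1)
        # Reparameterization trick: sample z from the approximate posterior.
        z = mean + torch.randn_like(mean) * (0.5 * log_var).exp()
        # Merge branch predictions (here: a simple average) so the output is
        # less dependent on any single branch's random initialization.
        out = torch.stack([branch(z) for branch in self.branches]).mean(dim=0)
        next_state, reward = out[..., :-1], out[..., -1]
        return next_state, reward, mean, log_var
```

Such a model would be trained on the offline trajectories with a variational (ELBO-style) objective and then used to roll out simulated trajectories for OPE.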