Behavior-constrained policy optimization has been demonstrated to be a successful paradigm for tackling offline reinforcement learning. By exploiting historical transitions, a policy is trained to maximize a learned value function while being constrained by the behavior policy to avoid significant distributional shift. In this paper, we propose closed-form policy improvement operators. We make the novel observation that the behavior constraint naturally motivates the use of a first-order Taylor approximation, leading to a linear approximation of the policy objective. Additionally, as practical datasets are usually collected by heterogeneous policies, we model the behavior policy as a Gaussian mixture and overcome the induced optimization difficulties by leveraging the LogSumExp lower bound and Jensen's inequality, giving rise to a closed-form policy improvement operator. We instantiate offline RL algorithms with our novel policy improvement operators and empirically demonstrate their effectiveness over state-of-the-art algorithms on the standard D4RL benchmark.
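To give intuition for the first-order argument, here is a minimal sketch; the symbols $Q$, $\mu_{\beta}$, and $\epsilon$ are illustrative notation introduced here, not definitions from this abstract. For a state $s$, behavior-constrained policy improvement roughly solves
\begin{equation*}
\max_{a}\; Q(s, a) \quad \text{s.t.}\quad \big\|a - \mu_{\beta}(s)\big\| \le \epsilon ,
\end{equation*}
where $\mu_{\beta}(s)$ denotes a behavior action. Because the constraint keeps $a$ close to $\mu_{\beta}(s)$, the first-order Taylor expansion
\begin{equation*}
Q(s, a) \;\approx\; Q\big(s, \mu_{\beta}(s)\big) + \nabla_{a} Q(s, a)\big|_{a=\mu_{\beta}(s)}^{\top}\big(a - \mu_{\beta}(s)\big)
\end{equation*}
is accurate, making the objective linear in $a$; maximizing a linear function over the norm ball then yields the closed-form solution $a^{*} = \mu_{\beta}(s) + \epsilon\, g / \|g\|$ with $g = \nabla_{a} Q(s, a)\big|_{a=\mu_{\beta}(s)}$.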