Online interaction with the environment to collect data samples for training a Reinforcement Learning (RL) agent is not always feasible due to economic and safety concerns. The goal of Offline Reinforcement Learning is to address this problem by learning effective policies from previously collected datasets. Standard off-policy RL algorithms are prone to overestimating the values of out-of-distribution (less explored) actions and are hence unsuitable for Offline RL. Behavior regularization, which constrains the learned policy to the support set of the dataset, has been proposed to tackle the limitations of standard off-policy algorithms. In this paper, we improve behavior regularized offline reinforcement learning and propose BRAC+. First, we propose a quantification of out-of-distribution actions and compare Kullback-Leibler (KL) divergence against Maximum Mean Discrepancy as the regularization protocol. We propose an analytical upper bound on the KL divergence as the behavior regularizer to reduce the variance associated with sample-based estimation. Second, we mathematically show that, under mild assumptions, the learned Q values can diverge even with behavior regularized policy updates. This leads to large overestimations of the Q values and deterioration of the learned policy's performance. To mitigate this issue, we add a gradient penalty term to the policy evaluation objective, which guarantees convergence of the Q values. On challenging offline RL benchmarks, BRAC+ outperforms baseline behavior regularized approaches by 40% to 87% and the state-of-the-art approach by 6%.
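To make the two modifications concrete, the following is a minimal LaTeX sketch of the objectives implied by the abstract; the notation and the penalty weights $\alpha$ and $\lambda$ are our own assumptions rather than the paper's exact formulation.

```latex
% A minimal sketch of the two objectives described in the abstract.
% Notation is our own assumption, not taken from the paper:
% \pi_\theta is the learned policy, \pi_b the behavior policy,
% Q_\phi the critic (Q_{\bar\phi} its target network), \mathcal{D} the
% offline dataset, and \alpha, \lambda are assumed penalty weights.

% Behavior regularized policy improvement with the KL regularizer:
\begin{equation}
\max_{\theta} \;
  \mathbb{E}_{s \sim \mathcal{D},\, a \sim \pi_\theta(\cdot \mid s)}
    \big[ Q_\phi(s, a) \big]
  - \alpha \, \mathbb{E}_{s \sim \mathcal{D}}
    \big[ D_{\mathrm{KL}}\big( \pi_\theta(\cdot \mid s) \,\|\, \pi_b(\cdot \mid s) \big) \big]
\end{equation}

% Policy evaluation augmented with a gradient penalty on the critic,
% penalizing sharp changes of Q with respect to the action input:
\begin{equation}
\min_{\phi} \;
  \mathbb{E}_{(s, a, r, s') \sim \mathcal{D}}
    \Big[ \big( Q_\phi(s, a) - r
      - \gamma \, \mathbb{E}_{a' \sim \pi_\theta(\cdot \mid s')} Q_{\bar\phi}(s', a') \big)^2 \Big]
  + \lambda \, \mathbb{E}_{s \sim \mathcal{D},\, a \sim \pi_\theta(\cdot \mid s)}
    \big[ \lVert \nabla_a Q_\phi(s, a) \rVert^2 \big]
\end{equation}
```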