Reinforcement learning (RL) agents can leverage batches of previously collected data to extract a reasonable control policy. An emerging issue in this offline RL setting, however, is that the bootstrapping update underlying many of our methods suffers from insufficient action-coverage: the standard max operator may select a maximal action that has not been seen in the dataset. Bootstrapping from these inaccurate values can lead to overestimation and even divergence. There are a growing number of methods that attempt to approximate an \emph{in-sample} max, one that only uses actions well-covered by the dataset. We highlight a simple fact: it is more straightforward to approximate an in-sample \emph{softmax} using only actions in the dataset. We show that policy iteration based on the in-sample softmax converges, and that for decreasing temperatures it approaches the in-sample max. We derive an In-Sample Actor-Critic (AC), using this in-sample softmax, and show that it is consistently better than or comparable to existing offline RL methods, and is also well-suited to fine-tuning.
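As a brief sketch of the limit the abstract appeals to (notation here is ours, not necessarily the paper's exact operator): let $\mathcal{A}_s \subseteq \mathcal{A}$ denote the actions with support in the dataset at state $s$ and $\tau > 0$ a temperature. Then
\begin{equation*}
\tau \log \sum_{a \in \mathcal{A}_s} \exp\!\big(Q(s,a)/\tau\big) \;\xrightarrow{\;\tau \to 0\;}\; \max_{a \in \mathcal{A}_s} Q(s,a),
\end{equation*}
so bootstrapping from an in-sample log-sum-exp requires evaluating $Q$ only at actions observed in the dataset, while recovering the in-sample max in the small-temperature limit.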