Offline reinforcement learning (RL) defines a sample-efficient learning paradigm in which a policy is learned from a static, previously collected dataset without further interaction with the environment. The major obstacle to offline RL is the estimation error arising from evaluating the value of out-of-distribution actions. To tackle this problem, most existing offline RL methods attempt to acquire a policy that is both ``close'' to the behaviors contained in the dataset and sufficiently improved over them, which requires balancing two possibly conflicting objectives. In this paper, we propose a novel approach, which we refer to as adaptive behavior regularization (ABR), to balance this critical trade-off. Using a simple sample-based regularization, ABR enables the policy to adaptively adjust its optimization objective between cloning and improving over the policy used to generate the dataset. In evaluations on D4RL, a widely adopted benchmark for offline RL, ABR achieves performance that improves upon or is competitive with existing state-of-the-art algorithms.
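The abstract does not spell out the ABR objective, so the following is only a rough, hypothetical sketch of what a sample-based behavior regularizer with an adaptive weight can look like in practice. The names (Actor, behavior_regularized_actor_loss, alpha) and the Q-scale normalization (borrowed from the TD3+BC style of adaptive weighting) are illustrative assumptions, not the method described in the paper.

```python
# Hypothetical sketch: a behavior-regularized actor loss with an adaptive weight.
# Names and the normalization scheme are illustrative assumptions, not the ABR objective.
import torch
import torch.nn as nn


class Actor(nn.Module):
    """Deterministic policy network mapping states to actions in [-1, 1]."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def behavior_regularized_actor_loss(actor, critic, states, dataset_actions, alpha=2.5):
    """Trade off Q-value maximization against staying close to dataset actions.

    The RL term is rescaled by the current Q magnitude, so the relative weight of
    "improve" versus "clone" adapts as value estimates change during training
    (a TD3+BC-style normalization, used here purely for illustration).
    """
    policy_actions = actor(states)
    q_values = critic(states, policy_actions)
    lam = alpha / q_values.abs().mean().detach()  # adaptive scaling of the RL term
    bc_term = ((policy_actions - dataset_actions) ** 2).mean()  # sample-based regularizer
    return -lam * q_values.mean() + bc_term


if __name__ == "__main__":
    # Toy usage with random data and a placeholder critic.
    state_dim, action_dim, batch = 17, 6, 32
    actor = Actor(state_dim, action_dim)
    critic = lambda s, a: s.sum(dim=1, keepdim=True) + a.sum(dim=1, keepdim=True)
    states = torch.randn(batch, state_dim)
    dataset_actions = torch.rand(batch, action_dim) * 2 - 1
    loss = behavior_regularized_actor_loss(actor, critic, states, dataset_actions)
    loss.backward()
    print(float(loss))
```

When the critic's value estimates grow, the scaling term shrinks the RL component, pulling the policy back toward the dataset actions; when estimates are modest, the improvement term dominates. This is the general flavor of balancing cloning against improvement, though the actual ABR regularizer may differ.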