Reinforcement learning has been shown to perform a range of complex tasks through interaction with an environment or by leveraging collected experience. However, many of these approaches presume optimal or near-optimal experience, or the presence of a consistent environment. In this work we propose a dual, advantage-based behavior policy based on counterfactual regret minimization. We demonstrate the flexibility of this approach and how it can be adapted both to online contexts, where the environment is available for collecting experience, and to a variety of other settings. We show that this new algorithm can outperform several strong baselines in different contexts across a range of continuous environments. Additional ablations provide insight into how our dual behavior regularized reinforcement learning approach is designed compared with other plausible modifications, and demonstrate its ability to generalize.
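The abstract does not specify the dual, advantage-based behavior policy itself, but the regret-matching rule at the heart of counterfactual regret minimization can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name is hypothetical, and it uses a discrete action set for clarity even though the paper evaluates on continuous environments.

```python
# Hypothetical sketch: a regret-matching behavior policy in the spirit of
# counterfactual regret minimization (CFR). Actions with positive advantage
# receive probability proportional to that advantage; if no action has a
# positive advantage, the policy falls back to uniform exploration.
def regret_matching_policy(advantages):
    """Map per-action advantage estimates to action probabilities."""
    positive = [max(a, 0.0) for a in advantages]
    total = sum(positive)
    if total == 0.0:
        # No positively-regretted action: explore uniformly.
        n = len(advantages)
        return [1.0 / n] * n
    return [p / total for p in positive]
```

For example, advantages `[2.0, -1.0, 2.0]` yield the distribution `[0.5, 0.0, 0.5]`, concentrating probability on the two equally advantageous actions while ignoring the disadvantageous one.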