Many advances that have improved the robustness and efficiency of deep reinforcement learning (RL) algorithms can, in one way or another, be understood as introducing additional objectives, or constraints, into the policy optimization step. These include ideas as far-ranging as exploration bonuses, entropy regularization, and regularization toward teachers or data priors when learning from experts or in offline RL. Often, the task reward and auxiliary objectives conflict with each other, and it is therefore natural to treat these examples as instances of multi-objective (MO) optimization problems. We study the principles underlying multi-objective RL (MORL) and introduce a new algorithm, Distillation of a Mixture of Experts (DiME), that is intuitive and scale-invariant under some conditions. We highlight its strengths on standard MO benchmark problems and consider case studies in which we recast offline RL and learning from experts as MO problems. This leads to a natural algorithmic formulation that sheds light on the connection between existing approaches. For offline RL, we use the MO perspective to derive a simple algorithm that optimizes for the standard RL objective plus a behavioral cloning term. This outperforms the state of the art on two established offline RL benchmarks.
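As a rough illustration of the last point, the combined offline-RL objective can be sketched as follows; this is an assumed generic actor-critic form with a trade-off weight $\alpha$ and a log-likelihood cloning term, not necessarily the exact formulation used in the paper:
\[
\max_{\pi} \;\; \mathbb{E}_{s \sim \mathcal{D}}\big[ Q^{\pi}\!\big(s, \pi(s)\big) \big] \;+\; \alpha \, \mathbb{E}_{(s,a) \sim \mathcal{D}}\big[ \log \pi(a \mid s) \big],
\]
where $\mathcal{D}$ is the offline dataset, the first term is the standard RL objective under a learned critic $Q^{\pi}$, and the second term is a behavioral-cloning regularizer that keeps the policy close to the data.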