In problem-solving, we humans can come up with multiple novel solutions to the same problem. However, reinforcement learning algorithms can only produce a set of monotonous policies that maximize the cumulative reward but lack diversity and novelty. In this work, we address the problem of generating novel policies in reinforcement learning tasks. Instead of following the multi-objective framework used in existing methods, we propose to rethink the problem under a novel perspective of constrained optimization. We first introduce a new metric to evaluate the difference between policies, and then design two practical novel policy generation methods following this new perspective. The two proposed methods, namely the Constrained Task Novel Bisector (CTNB) and the Interior Policy Differentiation (IPD), are derived from the feasible direction method and the interior point method, which are well known in the constrained optimization literature. Experimental comparisons on the MuJoCo control suite show that our methods achieve substantial improvement over previous novelty-seeking methods in terms of both the novelty of the resulting policies and their performance on the primal task.
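To make the constrained-optimization perspective concrete, the following is a minimal sketch of the kind of program the abstract alludes to; the symbols $J$, $D$, $\pi_i$, and $\delta$ are illustrative choices and not necessarily the paper's exact notation:

\begin{align*}
  \max_{\theta}\;\; & J(\pi_\theta) \;=\; \mathbb{E}_{\pi_\theta}\!\Big[\textstyle\sum_{t} \gamma^{t} r_t\Big] \\
  \text{s.t.}\;\;   & D(\pi_\theta, \pi_i) \;\ge\; \delta, \qquad i = 1, \dots, n,
\end{align*}

where $J(\pi_\theta)$ is the cumulative task reward, $D(\cdot,\cdot)$ is the metric measuring the difference between the current policy and each previously obtained policy $\pi_i$, and $\delta$ is a novelty threshold. Under this reading, CTNB would update the policy along feasible directions that do not violate the novelty constraint, while IPD would keep the iterates inside the feasible region throughout training, in the spirit of the interior point method.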