Reward machines (RMs) are a recent formalism for representing the reward function of a reinforcement learning task through a finite-state machine whose edges encode landmarks of the task using high-level events. The structure of RMs enables the decomposition of a task into simpler and independently solvable subtasks that help tackle long-horizon and/or sparse reward tasks. We propose a formalism for further abstracting the subtask structure by endowing an RM with the ability to call other RMs, thus composing a hierarchy of RMs (HRM). We exploit HRMs by treating each call to an RM as an independently solvable subtask using the options framework, and describe a curriculum-based method to induce HRMs from example traces observed by the agent. Our experiments reveal that exploiting a handcrafted HRM leads to faster convergence than with a flat HRM, and that learning an HRM is more scalable than learning an equivalent flat HRM.
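As a rough illustration of the structures described above, the following minimal Python sketch shows one way an RM, and a hierarchy in which an RM edge may call another RM, could be represented. The class names, fields, and `step` method are assumptions made for illustration only; they do not reproduce the paper's formal definitions or implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

# Illustrative sketch (names and fields are assumptions, not the paper's API).
# An RM is a finite-state machine whose edges are labelled with high-level events
# and emit rewards; in a hierarchy (HRM), an edge may additionally call another RM.

@dataclass
class RewardMachine:
    name: str
    initial_state: str
    accepting_states: frozenset
    # (state, event) -> (next_state, reward, name of called RM or None)
    transitions: Dict[Tuple[str, str], Tuple[str, float, Optional[str]]] = field(default_factory=dict)


@dataclass
class HierarchyOfRMs:
    machines: Dict[str, RewardMachine]
    root: str

    def step(self, rm_name: str, state: str, event: str) -> Tuple[str, float, Optional[str]]:
        """Advance one RM on a high-level event.

        Returns (next_state, reward, callee); a non-None callee names a sub-RM
        whose completion could be treated as an independently solvable subtask.
        Unrecognised (state, event) pairs leave the machine where it is.
        """
        rm = self.machines[rm_name]
        return rm.transitions.get((state, event), (state, 0.0, None))
```

Under this reading, each edge that names a sub-RM corresponds to a call that can be handled as an option whose termination condition is the called RM reaching an accepting state, mirroring how the abstract describes exploiting HRMs through the options framework.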