Reinforcement learning (RL) is a central problem in artificial intelligence. This problem consists of defining artificial agents that can learn optimal behaviour by interacting with an environment -- where the optimal behaviour is defined with respect to a reward signal that the agent seeks to maximize. Reward machines (RMs) provide a structured, automata-based representation of a reward function that enables an RL agent to decompose an RL problem into structured subproblems that can be efficiently learned via off-policy learning. Here we show that RMs can be learned from experience, instead of being specified by the user, and that the resulting problem decomposition can be used to effectively solve partially observable RL problems. We pose the task of learning RMs as a discrete optimization problem where the objective is to find an RM that decomposes the problem into a set of subproblems such that the combination of their optimal memoryless policies is an optimal policy for the original problem. We show the effectiveness of this approach on three partially observable domains, where it significantly outperforms A3C, PPO, and ACER, and discuss its advantages, limitations, and broader potential.
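To make the idea of a reward machine concrete, the following is a minimal illustrative sketch (not the paper's implementation): an RM modeled as a finite-state machine whose transitions fire on high-level events and emit rewards, so that each RM state corresponds to one subproblem. The event names and the two-step task ("reach the coffee, then the office") are hypothetical examples.

```python
# Illustrative reward-machine sketch; event names and the example task are assumptions.
class RewardMachine:
    def __init__(self, initial_state, transitions):
        # transitions: dict mapping (state, event) -> (next_state, reward)
        self.initial_state = initial_state
        self.transitions = transitions
        self.state = initial_state

    def reset(self):
        self.state = self.initial_state

    def step(self, event):
        """Advance the machine on an observed event and return the emitted reward."""
        next_state, reward = self.transitions.get(
            (self.state, event), (self.state, 0.0))  # unlisted events: self-loop, zero reward
        self.state = next_state
        return reward


# Two-subtask example: reach the coffee (u0 -> u1), then the office (u1 -> u2).
rm = RewardMachine(
    initial_state="u0",
    transitions={
        ("u0", "coffee"): ("u1", 0.0),
        ("u1", "office"): ("u2", 1.0),  # task complete, reward 1
    },
)

for event in ["office", "coffee", "office"]:
    print(event, rm.step(event))  # prints 0.0, 0.0, 1.0
```

Each RM state can then be paired with its own (memoryless) policy, which is the decomposition the abstract refers to.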