In some puzzles, the strategy needed near the goal can be quite different from the strategy that is effective earlier on, e.g. due to a smaller branching factor near the exit of a maze. A common approach in these cases is to apply both a forward and a backward search and to try to align the two. In this work we propose an approach that takes this idea a step further, within a reinforcement learning (RL) framework. Training a traditional forward-looking agent with RL can be difficult because rewards are often sparse, e.g. given only at the goal. Instead, we first train a backward-looking agent with a simple relaxed goal. We then augment the puzzle's state representation with straightforward hint features extracted from the behavior of that agent. Finally, we train a forward-looking agent on this informed, augmented state. We demonstrate that this simple "access" to partial backward plans leads to a substantial performance boost. On the challenging domain of the Sokoban puzzle, our RL approach substantially surpasses the best learned solvers that generalize over levels, and is competitive with the state-of-the-art performance of the best highly-crafted solution. Impressively, we achieve these results while learning from only a small number of practice levels and using simple RL techniques.
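For concreteness, the sketch below illustrates the three-step recipe on a toy grid maze rather than Sokoban: a backward agent is trained from the goal on a relaxed, reversed problem, simple hint features are extracted from its greedy behavior, and a forward agent is then trained with those hints made available. The maze, the plain tabular Q-learning used as a stand-in for the RL machinery, and the specific hint feature (cells visited by the backward greedy policy, fed back via reward shaping rather than state augmentation) are all illustrative assumptions, not the paper's implementation.

```python
# Toy sketch of the backward-then-forward training scheme (assumptions only).
import random
from typing import AbstractSet, Callable, Dict, List, Set, Tuple

State = Tuple[int, int]
Step = Callable[[State, int], Tuple[State, float, bool]]

GRID = ["#########",
        "#S..#...#",
        "#.#.#.#.#",
        "#.#...#G#",
        "#########"]
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def find(ch: str) -> State:
    return next((r, c) for r, row in enumerate(GRID) for c, x in enumerate(row) if x == ch)

START, GOAL = find("S"), find("G")

def make_step(target: State, hint_cells: AbstractSet[State] = frozenset()) -> Step:
    """Grid dynamics: move if the neighbor is free, reward 1 at `target`,
    plus a small shaping bonus on hinted cells (used by the forward agent)."""
    def step(s: State, a: int) -> Tuple[State, float, bool]:
        r, c = s[0] + MOVES[a][0], s[1] + MOVES[a][1]
        s2 = (r, c) if GRID[r][c] != "#" else s
        return s2, float(s2 == target) + (0.01 if s2 in hint_cells else 0.0), s2 == target
    return step

def q_learning(step: Step, start: State, episodes: int = 300, alpha: float = 0.5,
               gamma: float = 0.95, eps: float = 0.2) -> Dict[State, List[float]]:
    """Plain tabular Q-learning, standing in for the 'simple RL techniques'."""
    q: Dict[State, List[float]] = {}
    for _ in range(episodes):
        s, done, t = start, False, 0
        while not done and t < 200:
            q.setdefault(s, [0.0] * 4)
            a = random.randrange(4) if random.random() < eps else max(range(4), key=lambda i: q[s][i])
            s2, rew, done = step(s, a)
            q.setdefault(s2, [0.0] * 4)
            q[s][a] += alpha * (rew + gamma * max(q[s2]) - q[s][a])
            s, t = s2, t + 1
    return q

def greedy_cells(q: Dict[State, List[float]], step: Step, start: State) -> Set[State]:
    """Cells visited by the greedy policy -- used here as the hint features."""
    s, seen = start, {start}
    for _ in range(200):
        if s not in q:
            break
        s, _, done = step(s, max(range(4), key=lambda i: q[s][i]))
        seen.add(s)
        if done:
            break
    return seen

# Phase 1: the backward agent starts at the goal and solves the relaxed,
# reversed problem (reach the start; in Sokoban, pull boxes instead of pushing).
back_step = make_step(target=START)
hints = greedy_cells(q_learning(back_step, GOAL), back_step, GOAL)

# Phases 2-3: the forward agent trains with the backward hints made available.
q_forward = q_learning(make_step(target=GOAL, hint_cells=hints), START)
```

In this sketch the hints enter through a shaping bonus only because that keeps the tabular example short; the approach described above instead concatenates the hint features to the state representation seen by the forward agent.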