Incorporating prior knowledge into reinforcement learning algorithms remains largely an open question. Even when insights about the environment dynamics are available, reinforcement learning is traditionally applied in a tabula rasa setting, where the agent must explore and learn everything from scratch. In this paper, we consider the problem of exploiting priors about action sequence equivalence: that is, when different sequences of actions produce the same effect. We propose a new local exploration strategy calibrated to minimize collisions and maximize new state visitations. We show that this strategy can be computed at little cost by solving a convex optimization problem. By replacing the usual epsilon-greedy strategy in a DQN, we demonstrate its potential in several environments with various dynamic structures.