Learning optimal policies in sparse-reward settings is difficult because the learning agent receives little to no feedback on the quality of its actions. In these situations, a good strategy is to focus on exploration, in the hope of discovering a reward signal to improve on. A learning algorithm capable of dealing with this kind of setting has to be able to (1) explore possible agent behaviors and (2) exploit any reward it may discover. Efficient exploration algorithms have been proposed, but they require the definition of a behavior space, which associates to each agent its resulting behavior in a space known to be worth exploring. The need to define this space is a limitation of these algorithms. In this work, we introduce STAX, an algorithm designed to learn a behavior space on-the-fly and to explore it while efficiently optimizing any discovered reward. It does so by separating the exploration and learning of the behavior space from the exploitation of the reward through an alternating two-step process. In the first step, STAX builds a repertoire of diverse policies while learning a low-dimensional representation of the high-dimensional observations generated during the evaluation of the policies. In the exploitation step, emitters are used to optimize the performance of the discovered rewarding solutions. Experiments conducted on three different sparse-reward environments show that STAX performs comparably to existing baselines while requiring much less prior information about the task, as it autonomously builds the behavior space.
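To make the alternating structure concrete, the sketch below illustrates one possible reading of the loop described above: an exploration step that grows a repertoire of diverse policies while refitting a low-dimensional representation of the collected observations, followed by an exploitation step in which local optimizers improve the rewarding solutions found so far. All names and toy components here (a random linear "environment", PCA in place of the learned autoencoder, a simple hill-climber in place of the emitters) are simplifying assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

POLICY_DIM = 8    # toy policies are plain parameter vectors
OBS_DIM = 64      # dimensionality of the high-dimensional observations
DESC_DIM = 2      # dimensionality of the learned behavior space

OBS_PROJ = rng.standard_normal((OBS_DIM, POLICY_DIM))  # toy, fixed "environment"


def evaluate(policy):
    """Toy evaluation: returns a high-dimensional observation and a sparse reward."""
    obs = np.tanh(OBS_PROJ @ policy)
    reward = max(0.0, float(policy[0]) - 2.0)  # zero for most policies (sparse)
    return obs, reward


def fit_encoder(observations):
    """Stand-in for the learned representation: PCA of all observations so far."""
    X = np.asarray(observations)
    X = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:DESC_DIM]  # projection onto the top principal components


def is_novel(descriptor, repertoire, threshold=0.5):
    """Keep a policy only if its descriptor is far from everything already stored."""
    return not repertoire or min(
        np.linalg.norm(descriptor - d) for _, d, _ in repertoire
    ) > threshold


def stax_sketch(n_iterations=10, batch=32):
    repertoire = []     # archive of (policy, behavior descriptor, reward)
    observations = []   # all high-dimensional observations collected so far
    encoder = rng.standard_normal((DESC_DIM, OBS_DIM))  # random before any data

    for _ in range(n_iterations):
        # Exploration step: grow a repertoire of behaviorally diverse policies.
        for _ in range(batch):
            parent = (repertoire[rng.integers(len(repertoire))][0]
                      if repertoire else rng.standard_normal(POLICY_DIM))
            policy = parent + 0.3 * rng.standard_normal(POLICY_DIM)
            obs, reward = evaluate(policy)
            observations.append(obs)
            descriptor = encoder @ obs
            if is_novel(descriptor, repertoire):
                repertoire.append((policy, descriptor, reward))

        # Refine the behavior space from the observations gathered so far,
        # then re-describe the repertoire in the refined space.
        encoder = fit_encoder(observations)
        repertoire = [(p, encoder @ evaluate(p)[0], r) for p, _, r in repertoire]

        # Exploitation step: a simple hill-climber stands in for the emitters,
        # locally optimizing each rewarding solution discovered so far.
        for i, (policy, descriptor, reward) in enumerate(repertoire):
            if reward > 0:
                candidate = policy + 0.1 * rng.standard_normal(POLICY_DIM)
                cand_reward = evaluate(candidate)[1]
                if cand_reward > reward:
                    repertoire[i] = (candidate, descriptor, cand_reward)

    return repertoire


if __name__ == "__main__":
    archive = stax_sketch()
    print(f"{len(archive)} policies, best reward: "
          f"{max(r for _, _, r in archive):.3f}")
```

In this toy version the behavior space is refit from scratch at every iteration; the key design choice it mirrors is that diversity is measured in a representation learned from the observations themselves rather than in a hand-defined behavior space.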