Aiming to produce reinforcement learning (RL) policies that are human-interpretable and generalize better to novel scenarios, Trivedi et al. (2021) present a method (LEAPS) that first learns a program embedding space to continuously parameterize diverse programs from a pre-generated program dataset, and then searches this learned embedding space for a task-solving program when given a task. Despite encouraging results, the program policies that LEAPS can produce are limited by the distribution of the program dataset. Furthermore, during the search, LEAPS evaluates each candidate program solely based on its return, failing to precisely reward correct parts of programs and penalize incorrect parts. To address these issues, we propose to learn a meta-policy that composes a series of programs sampled from the learned program embedding space. By composing programs, our proposed method can produce program policies that describe out-of-distributionally complex behaviors and directly assign credit to the programs that induce desired behaviors. We design and conduct extensive experiments in the Karel domain. The experimental results show that our proposed framework outperforms baselines. The ablation studies confirm the limitations of LEAPS and justify our design choices.
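To make the composition idea concrete, the following is a minimal conceptual sketch, not the authors' implementation: a meta-policy repeatedly outputs a latent vector in the pre-trained program embedding space, a frozen decoder maps that vector to a program, the program is executed to completion, and the resulting reward is recorded per program so credit can be assigned to the sub-program that produced each behavior. All identifiers here (MetaPolicy, ProgramDecoder-style decoder, env.execute_program, the Gaussian exploration noise, and the dimensions) are illustrative assumptions rather than details from the paper.

```python
# Hedged sketch of a meta-policy that composes programs sampled from a
# pre-trained program embedding space. Names and interfaces are assumptions.
import torch
import torch.nn as nn


class MetaPolicy(nn.Module):
    """Recurrent meta-policy: maps the current environment state (and its own
    recurrent state) to the next latent program embedding to execute."""

    def __init__(self, state_dim: int, latent_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, hidden_dim), nn.ReLU())
        self.rnn = nn.GRUCell(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, latent_dim)  # mean of a Gaussian over z

    def forward(self, state: torch.Tensor, h: torch.Tensor):
        h = self.rnn(self.encoder(state), h)
        return self.head(h), h


def rollout(meta_policy, decoder, env, num_programs=5, latent_dim=64, hidden_dim=256):
    """Compose up to `num_programs` programs into one episode and return the
    reward obtained by each program, enabling per-program credit assignment.

    Assumes a hypothetical `decoder.decode(z)` that maps a latent vector to an
    executable program, and a hypothetical `env.execute_program(program)` that
    runs the program until it terminates and returns (state, reward, done)."""
    state = env.reset()                               # shape: (1, state_dim), assumed
    h = torch.zeros(1, hidden_dim)
    per_program_rewards = []
    for _ in range(num_programs):
        z_mean, h = meta_policy(state, h)
        z = z_mean + 0.1 * torch.randn(latent_dim)    # exploration noise (assumed)
        program = decoder.decode(z)                   # frozen decoder from the
                                                      # pre-trained embedding space
        state, reward, done = env.execute_program(program)
        per_program_rewards.append(reward)            # credit for this sub-program
        if done:
            break
    return per_program_rewards
```

Because each composed program's contribution is logged separately, a return-based update can reward or penalize individual programs rather than only the concatenated policy as a whole, which is the credit-assignment benefit described above.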