Learning a good state representation is a critical skill when dealing with multiple tasks in Reinforcement Learning, as it allows for transfer and better generalization between tasks. However, defining what constitutes a useful representation is far from simple, and there is so far no standard method for finding such an encoding. In this paper, we argue that distillation -- a process that aims at imitating a set of given policies with a single neural network -- can be used to learn a state representation displaying favorable characteristics. In this regard, we define three criteria that measure desirable features of a state encoding: the ability to select important variables in the input space, the ability to efficiently separate states according to their corresponding optimal action, and the robustness of the state encoding to new tasks. We first evaluate these criteria and verify the contribution of distillation to state representation on a toy environment based on the standard inverted pendulum problem, before extending our analysis to more complex visual tasks from the Atari and Procgen benchmarks.
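To make the distillation process concrete, the following is a minimal sketch (not the paper's actual implementation): a single linear-softmax "student" policy is trained to match the action distributions of given "teacher" policies by minimizing a KL divergence, which is the standard formulation of policy distillation. The teacher distributions here are random stand-ins, and all sizes and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative sizes): 4-d states, 3 discrete actions.
n_states, state_dim, n_actions = 64, 4, 3
states = rng.normal(size=(n_states, state_dim))

# Stand-in "teacher" policies: fixed action distributions over the states,
# playing the role of the set of policies to be distilled.
logits_t = rng.normal(size=(n_states, n_actions))
teacher = np.exp(logits_t) / np.exp(logits_t).sum(axis=1, keepdims=True)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def kl_loss(W):
    # Mean KL(teacher || student) over the batch of states.
    p = softmax(states @ W)
    return np.mean(np.sum(teacher * (np.log(teacher) - np.log(p)), axis=1))

# Student: a single linear policy network distilled from the teachers.
W = np.zeros((state_dim, n_actions))
loss_before = kl_loss(W)

# Gradient descent on the KL; for a linear-softmax student,
# dL/dW = states^T (student_probs - teacher_probs) / n_states.
for _ in range(200):
    p = softmax(states @ W)
    W -= 0.5 * (states.T @ (p - teacher)) / n_states

loss_after = kl_loss(W)
```

After training, the student's action distributions move toward the teachers', which is the sense in which distillation "imitates a set of given policies with a single network"; in the paper, the student is a deep network and its intermediate layer serves as the learned state encoding.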