The ability to transfer in reinforcement learning is key to building agents with general artificial intelligence. In this paper, we consider the problem of learning to transfer simultaneously across both environments (ENV) and tasks (TASK) and, perhaps more importantly, of doing so by learning from only sparse (ENV, TASK) pairs out of all possible combinations. We propose a novel compositional neural network architecture that represents a meta rule for composing policies from environment and task embeddings. Notably, one of the main challenges is to learn the embeddings jointly with the meta rule. We further propose new training methods to disentangle the embeddings, making them both distinctive signatures of the environments and tasks and effective building blocks for composing the policies. Experiments on GridWorld and Thor, in which the agent takes an egocentric view as input, show that our approach achieves high success rates on all (ENV, TASK) pairs after learning from only 40% of them.
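To make the composition idea concrete, here is a minimal sketch, in PyTorch, of one way a learned meta rule could combine environment and task embeddings into a per-(ENV, TASK) policy. This is not the paper's implementation: the class name, dimensions, and the bilinear composition used here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ComposedPolicy(nn.Module):
    """Hypothetical sketch: compose a policy from environment and task
    embeddings via a learned meta rule (a bilinear composition here).
    Names, dimensions, and the composition rule are assumptions for
    illustration, not the paper's architecture."""

    def __init__(self, n_envs, n_tasks, emb_dim, state_dim, n_actions):
        super().__init__()
        self.env_emb = nn.Embedding(n_envs, emb_dim)    # signature of each ENV
        self.task_emb = nn.Embedding(n_tasks, emb_dim)  # signature of each TASK
        # Meta rule: a shared tensor mapping an (ENV, TASK) embedding pair
        # to the parameters of a state -> action-logit linear map.
        self.meta = nn.Parameter(
            torch.randn(emb_dim, emb_dim, state_dim * n_actions) * 0.01)
        self.state_dim, self.n_actions = state_dim, n_actions

    def forward(self, state, env_id, task_id):
        e = self.env_emb(env_id)    # (B, emb_dim)
        t = self.task_emb(task_id)  # (B, emb_dim)
        # Compose per-(ENV, TASK) policy weights from the two embeddings.
        w = torch.einsum('be,bt,etp->bp', e, t, self.meta)
        w = w.view(-1, self.n_actions, self.state_dim)
        logits = torch.einsum('bas,bs->ba', w, state)
        return torch.softmax(logits, dim=-1)  # action distribution

# Usage: embeddings and the meta rule are trained jointly, so a policy can
# be synthesized for (ENV, TASK) pairs never seen together during training.
policy = ComposedPolicy(n_envs=10, n_tasks=5, emb_dim=16,
                        state_dim=32, n_actions=4)
probs = policy(torch.randn(2, 32), torch.tensor([0, 3]), torch.tensor([1, 4]))
```

Because the embeddings enter the policy only through the shared meta rule, an unseen (ENV, TASK) combination can reuse an environment signature learned with other tasks and a task signature learned in other environments, which is what makes learning from sparse pairs plausible.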