In many real-world applications, reinforcement learning (RL) agents might have to solve multiple tasks, each typically modeled via a reward function. If reward functions are expressed linearly, and the agent has previously learned a set of policies for different tasks, successor features (SFs) can be exploited to combine these policies and identify reasonable solutions for new problems. However, the identified solutions are not guaranteed to be optimal. We introduce a novel algorithm that addresses this limitation. It allows RL agents to combine existing policies and directly identify optimal policies for arbitrary new problems, without requiring any further interactions with the environment. We first show (under mild assumptions) that the transfer learning problem tackled by SFs is equivalent to the problem of learning to optimize multiple objectives in RL. We then introduce an SF-based extension of the Optimistic Linear Support algorithm to learn a set of policies whose SFs form a convex coverage set. We prove that policies in this set can be combined via generalized policy improvement to construct optimal behaviors for any new linearly-expressible task, without requiring any additional training samples. We empirically show that our method outperforms state-of-the-art competing algorithms in both discrete and continuous domains under value function approximation.
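For concreteness, the following is a brief sketch of the linear-reward assumption and the generalized policy improvement (GPI) step the abstract refers to, written in the notation commonly used in the SF literature; the symbols $\boldsymbol{\phi}$ (reward features), $\mathbf{w}$ (task weight vector), and $\boldsymbol{\psi}^{\pi}$ (successor features of policy $\pi$) are assumed here rather than defined in this section:

\[
r_{\mathbf{w}}(s,a,s') = \boldsymbol{\phi}(s,a,s')^{\top}\mathbf{w}, \qquad
\boldsymbol{\psi}^{\pi}(s,a) = \mathbb{E}^{\pi}\!\Big[\textstyle\sum_{t=0}^{\infty} \gamma^{t}\,\boldsymbol{\phi}(s_t,a_t,s_{t+1}) \,\Big|\, s_0 = s,\, a_0 = a\Big],
\]
\[
q^{\pi}_{\mathbf{w}}(s,a) = \boldsymbol{\psi}^{\pi}(s,a)^{\top}\mathbf{w}, \qquad
\pi^{\mathrm{GPI}}(s) \in \operatorname*{arg\,max}_{a}\; \max_{i}\; \boldsymbol{\psi}^{\pi_i}(s,a)^{\top}\mathbf{w}.
\]

Under this decomposition, a new task is specified entirely by its weight vector $\mathbf{w}$, and the GPI policy $\pi^{\mathrm{GPI}}$ can be computed from the stored SFs of previously learned policies $\pi_1,\dots,\pi_n$ without further environment interaction.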