To achieve the ambitious goals of artificial intelligence, reinforcement learning must include planning with a model of the world that is abstract in state and time. Deep learning has made progress in state abstraction, but, although the theory of time abstraction has been extensively developed based on the options framework, in practice options have rarely been used in planning. One reason for this is that the space of possible options is immense, and the methods previously proposed for option discovery do not take into account how the option models will be used in planning. Options are typically discovered by posing subsidiary tasks such as reaching a bottleneck state or maximizing a sensory signal other than the reward. Each subtask is solved to produce an option, and then a model of the option is learned and made available to the planning process. The subtasks proposed in most previous work ignore the reward on the original problem, whereas we propose subtasks that use the original reward plus a bonus based on a feature of the state at the time the option stops. We show that options and option models obtained from such reward-respecting subtasks are much more likely to be useful in planning and can be learned online and off-policy using existing learning algorithms. Reward-respecting subtasks strongly constrain the space of options and thereby also provide a partial solution to the problem of option discovery. Finally, we show how the algorithms for learning values, policies, options, and models can be unified using general value functions.
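As a rough illustration of the construction described above, the return for a reward-respecting subtask can be sketched as the original rewards accumulated until the option stops plus a stopping bonus computed from a feature of the stopping state. This is a minimal notational sketch, not the paper's exact formulation: the symbols $\tau$ (the option's stopping time), $x(\cdot)$ (state features), $b(\cdot)$ (the stopping bonus), and the use of discounting $\gamma$ are assumptions introduced here for illustration.

$$
G^{\text{sub}}_t \;\doteq\; R_{t+1} + \gamma R_{t+2} + \cdots + \gamma^{\,\tau-t-1} R_\tau \;+\; \gamma^{\,\tau-t}\, b\big(x(S_\tau)\big)
$$

The option for the subtask is then the policy and stopping condition that maximize this return; the contrast with most prior option-discovery work is that the cumulant here is the original reward itself, rather than a signal that ignores it.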