Both animals and artificial agents benefit from state representations that support rapid transfer of learning across tasks and enable them to efficiently traverse their environments to reach rewarding states. The successor representation (SR), which measures the expected cumulative, discounted state occupancy under a fixed policy, enables efficient transfer to different reward structures in an otherwise constant Markovian environment and has been hypothesized to underlie aspects of biological behavior and neural activity. In the real world, however, rewards may be available for consumption only once or may shift location, and agents may simply aim to reach goal states as rapidly as possible, without the constraint of artificially imposed task horizons. In such cases, the most behaviorally relevant representation would carry information about when the agent is likely to first reach states of interest, rather than how often it should expect to visit them over a potentially infinite time span. To reflect such demands, we introduce the first-occupancy representation (FR), which measures the expected temporal discount to the first time a state is accessed. We demonstrate that the FR facilitates the selection of efficient paths to desired states, allows the agent, under certain conditions, to plan provably optimal trajectories defined by a sequence of subgoals, and induces behavior similar to that of animals avoiding threatening stimuli.
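For concreteness, the two representations contrasted above can be written as follows; this is a minimal formalization in standard RL notation, which the abstract itself does not fix ($\pi$ denotes the policy, $\gamma \in [0,1)$ the discount factor, $s_t$ the state at time $t$, and the symbols $M^\pi$, $F^\pi$, and $t_{s'}$ are introduced here for illustration). The SR is
\[
M^\pi(s, s') = \mathbb{E}_\pi\!\left[\, \sum_{t=0}^{\infty} \gamma^{t} \, \mathbb{1}[s_t = s'] \;\middle|\; s_0 = s \right],
\]
whereas the FR replaces cumulative occupancy with the discount accrued by the first occupancy,
\[
F^\pi(s, s') = \mathbb{E}_\pi\!\left[\, \gamma^{\,t_{s'}} \;\middle|\; s_0 = s \right], \qquad t_{s'} := \min\{\, t \ge 0 : s_t = s' \,\},
\]
with the convention that $\gamma^{\,t_{s'}} = 0$ when $s'$ is never reached. Written this way, the contrast is direct: the SR sums discounted indicators over every visit to $s'$, while the FR retains only the discount at the first visit, which is why it tracks how soon a state can be reached rather than how often it is occupied.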