The von Neumann-Morgenstern (VNM) utility theorem shows that, under certain axioms of rationality, decision-making reduces to maximizing the expectation of some utility function. We extend these axioms to increasingly structured sequential decision-making settings and identify the structure of the corresponding utility functions. In particular, we show that memoryless preferences lead to a utility in the form of a per-transition reward and a multiplicative factor on the future return. This result motivates a generalization of Markov Decision Processes (MDPs) with this structure on the agent's returns, which we call Affine-Reward MDPs. A stronger constraint on preferences is needed to recover the commonly used cumulative sum of scalar rewards in MDPs. A yet stronger constraint simplifies the utility function for goal-seeking agents to the form of a difference in some function of states, which we call potential functions. Our necessary and sufficient conditions demystify the reward hypothesis that underlies the design of rational agents in reinforcement learning by adding an axiom to the VNM rationality axioms, and they motivate new directions for AI research involving sequential decision making.
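As an illustrative sketch of the memoryless case described above (the symbols $r$ and $\gamma$ are chosen here purely for exposition), the utility of a trajectory decomposes recursively into a per-transition reward plus a transition-dependent multiplicative factor on the future return:
\[
U(s_0, a_0, s_1, a_1, s_2, \ldots) \;=\; r(s_0, a_0, s_1) \;+\; \gamma(s_0, a_0, s_1)\, U(s_1, a_1, s_2, \ldots).
\]
When the multiplicative factor is a constant $\gamma \in [0,1)$, this recursion unrolls to the familiar discounted cumulative sum of scalar rewards, $U = \sum_{t \ge 0} \gamma^{t}\, r(s_t, a_t, s_{t+1})$, corresponding to the stronger constraint on preferences mentioned above.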