We study countably infinite Markov decision processes (MDPs) with real-valued transition rewards. Every infinite run induces the following sequences of payoffs: 1. Point payoff (the sequence of directly seen transition rewards), 2. Mean payoff (the sequence of the sums of all rewards so far, divided by the number of steps), and 3. Total payoff (the sequence of the sums of all rewards so far). For each payoff type, the objective is to maximize the probability that the $\liminf$ is non-negative. We establish the complete picture of the strategy complexity of these objectives, i.e., how much memory is necessary and sufficient for $\varepsilon$-optimal (resp. optimal) strategies. Some cases can be won with memoryless deterministic strategies, while others require a step counter, a reward counter, or both.
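For concreteness, the three payoff sequences induced by a run can be written out explicitly. This is a sketch of the definitions paraphrased from the text above; the notation $r_1, r_2, \dots$ for the transition rewards seen along a run, and the names $\text{Point}_n$, $\text{Mean}_n$, $\text{Total}_n$, are assumptions for illustration, not fixed by the abstract:

```latex
% r_1, r_2, ... : transition rewards along an infinite run (hypothetical notation)
\[
  \text{Point}_n = r_n, \qquad
  \text{Mean}_n  = \frac{1}{n}\sum_{i=1}^{n} r_i, \qquad
  \text{Total}_n = \sum_{i=1}^{n} r_i .
\]
% For each payoff type X in {Point, Mean, Total}, the objective is to maximize
\[
  \Pr\Bigl(\,\liminf_{n\to\infty} X_n \ge 0\,\Bigr).
\]
```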