Predictive state representations (PSRs) are models of controlled non-Markov observation sequences that exhibit the same generative process governing POMDP observations without relying on an underlying latent state. In that respect, a PSR is indistinguishable from the corresponding POMDP. However, PSRs notoriously ignore the notion of rewards, which undermines the general utility of PSR models for control, planning, or reinforcement learning. Therefore, we describe a necessary and sufficient accuracy condition that determines whether a PSR is able to accurately model POMDP rewards; we show that rewards can be approximated even when the accuracy condition is not satisfied; and we find that a non-trivial number of POMDPs taken from a well-known third-party repository do not satisfy the accuracy condition. We propose reward-predictive state representations (R-PSRs), a generalization of PSRs which accurately models both observations and rewards, and we develop value iteration for R-PSRs. We show that there is a mismatch between optimal POMDP policies and the optimal PSR policies derived from approximate rewards. On the other hand, optimal R-PSR policies perfectly match optimal POMDP policies, reconfirming R-PSRs as accurate stateless generative models of observations and rewards.
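For readers unfamiliar with the PSR formalism referenced above, the following is a minimal sketch of the standard linear-PSR prediction and update equations. The notation ($Q$, $m_t$, $M_{ao}$, $e_i$) is introduced here purely for illustration and need not match this paper's; in particular, these equations model observations only and do not capture the reward-predictive extension (R-PSR) that the paper proposes. The state of a linear PSR is the vector of predictions of a core set of tests $Q = \{q_1, \dots, q_k\}$ given the history $h$,
\[
p(Q \mid h) = \bigl[\, \Pr(q_1 \mid h), \dots, \Pr(q_k \mid h) \,\bigr]^\top ,
\]
any test $t$ is predicted linearly from this vector,
\[
\Pr(t \mid h) = p(Q \mid h)^\top m_t ,
\]
and after taking action $a$ and observing $o$ the state is updated via Bayes' rule,
\[
\Pr(q_i \mid hao) = \frac{\Pr(a o q_i \mid h)}{\Pr(a o \mid h)} = \frac{p(Q \mid h)^\top M_{ao}\, e_i}{p(Q \mid h)^\top m_{ao}} ,
\]
where $m_{ao}$ is the weight vector of the one-step test $ao$, $M_{ao}$ is the matrix whose $i$-th column is the weight vector of the extended test $aoq_i$, and $e_i$ is the $i$-th standard basis vector.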