Markov chains are the de facto finite-state model for stochastic dynamical systems, and Markov decision processes (MDPs) extend Markov chains by incorporating non-deterministic behavior. Given an MDP and rewards on states, a classical optimization criterion is the maximal expected total reward when the MDP stops after $T$ steps, which can be computed by a simple dynamic programming algorithm. We consider a natural generalization of the problem in which the stopping time is chosen according to a probability distribution whose expectation is $T$, so as to optimize the expected total reward. Quite surprisingly, we establish an inter-reducibility between the expected stopping-time problem for Markov chains and the Positivity problem (which is related to the well-known Skolem problem), for which establishing either decidability or undecidability would be a major breakthrough. Given the hardness of the exact problem, we consider its approximate version: we show that it can be solved in exponential time for Markov chains and in exponential space for MDPs.
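To make the classical criterion concrete, here is a minimal sketch of the $T$-step dynamic program in Python. Everything in it (the function name, the representation of the MDP as one transition matrix per action, and the convention that the state reward is collected at each of the $T$ visited states, starting with the initial state) is an illustrative assumption, not notation from the paper.

```python
import numpy as np

def max_expected_total_reward(P, r, T):
    """Maximal expected total reward of an MDP stopped after T steps.

    P -- list of |S| x |S| row-stochastic matrices, one per action
    r -- length-|S| reward vector (reward collected on visiting a state)
    T -- horizon; assumed convention: a reward is collected at each of
         the T visited states, starting with the initial state
    Returns the value vector V, with V[s] the optimum from state s.
    """
    V = np.zeros(len(r))  # value with 0 steps remaining
    for _ in range(T):
        # Bellman backup: collect r now, then pick the best action
        V = r + np.max([Pa @ V for Pa in P], axis=0)
    return V

# Tiny hypothetical example: two actions on three states
P = [np.array([[0., 1., 0.], [0., 0., 1.], [1., 0., 0.]]),
     np.array([[1., 0., 0.], [1., 0., 0.], [0., 1., 0.]])]
r = np.array([1.0, 0.0, 5.0])
print(max_expected_total_reward(P, r, 4))
```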
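For the generalized criterion on a Markov chain, the objective under a fixed stopping-time distribution $p$ decomposes as $\sum_t p_t \cdot v_t$, where $v_t$ is the expected reward accumulated over the first $t$ steps; the optimization is then over distributions $p$ with mean $T$. The sketch below evaluates this objective for a given $p$; the names and the reward convention are again assumptions, matching the sketch above.

```python
import numpy as np

def expected_reward_under_stopping(M, r, mu, p):
    """Expected total reward of a Markov chain with a randomized stop.

    M  -- |S| x |S| transition matrix of the chain
    r  -- length-|S| reward vector
    mu -- initial distribution over states
    p  -- p[t] is the probability of stopping after exactly t steps;
          a valid instance satisfies sum_t t * p[t] == T
    """
    dist = np.asarray(mu, dtype=float)
    acc = 0.0    # v_t: expected reward accumulated in the first t steps
    total = 0.0
    for t in range(len(p)):
        total += p[t] * acc   # stop after t steps with probability p[t]
        acc += dist @ r       # collect the reward of the t-th visited state
        dist = dist @ M       # advance the chain one step
    return total
```

Since the objective is linear in $p$, its optimum over distributions with mean $T$ is the concave envelope of $t \mapsto v_t$ evaluated at $T$, so mixing two stopping times $t_1 < T < t_2$ can strictly beat stopping deterministically at $T$ whenever $v_t$ fails to be concave there; this is one way to see that the randomized variant is a genuine generalization.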