We prove new upper and lower bounds for the sample complexity of finding an $\epsilon$-optimal policy of an infinite-horizon average-reward Markov decision process (MDP) given access to a generative model. When the mixing time of the probability transition matrix of all policies is at most $t_\mathrm{mix}$, we provide an algorithm that solves the problem using $\widetilde{O}(t_\mathrm{mix} \epsilon^{-3})$ (oblivious) samples per state-action pair. Further, we provide a lower bound showing that a linear dependence on $t_\mathrm{mix}$ is necessary in the worst case for any algorithm which computes oblivious samples. We obtain our results by establishing connections between infinite-horizon average-reward MDPs and discounted MDPs of possible further utility.
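As a hedged sketch (not stated in the abstract above), one way a reduction to discounted MDPs could yield the $\widetilde{O}(t_\mathrm{mix}\epsilon^{-3})$ rate is the following. It assumes two ingredients that are standard in this line of work but are our own choice of presentation here: the approximation bound $|(1-\gamma)V^\pi_\gamma - \rho^\pi| \le C\, t_\mathrm{mix}(1-\gamma)$ for all policies $\pi$ under the mixing assumption, and a per-state-action sample complexity of $\widetilde{O}((1-\gamma)^{-3}\epsilon'^{-2})$ for computing an $\epsilon'$-optimal policy of a $\gamma$-discounted MDP with a generative model.

% Hedged sketch of the average-reward-to-discounted reduction; the constant
% $C$ and both assumed bounds are illustrative, not taken from this abstract.
\begin{align*}
  &\text{Choose } 1-\gamma = \frac{\epsilon}{4 C\, t_\mathrm{mix}}
    \quad\Rightarrow\quad
    \bigl|(1-\gamma) V^\pi_\gamma - \rho^\pi\bigr| \le \tfrac{\epsilon}{4}
    \text{ for every policy } \pi. \\
  &\text{Any } \epsilon'\text{-optimal discounted policy with }
    \epsilon' = \frac{\epsilon}{2(1-\gamma)}
    \text{ is then } \epsilon\text{-optimal in average reward.} \\
  &\text{Samples per state-action pair: }
    \widetilde{O}\!\Bigl(\tfrac{1}{(1-\gamma)^{3}\,\epsilon'^{2}}\Bigr)
    = \widetilde{O}\!\Bigl(\tfrac{1}{(1-\gamma)\,\epsilon^{2}}\Bigr)
    = \widetilde{O}\!\Bigl(\tfrac{t_\mathrm{mix}}{\epsilon^{3}}\Bigr).
\end{align*}

In words, taking the effective horizon $1/(1-\gamma)$ proportional to $t_\mathrm{mix}/\epsilon$ makes the (normalized) discounted value uniformly $\epsilon/4$-close to the average reward, and the larger tolerable discounted error $\epsilon' = \epsilon/(2(1-\gamma))$ offsets two powers of the horizon, leaving a single factor of $t_\mathrm{mix}$.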