Much of reinforcement learning theory is built on top of oracles that are computationally hard to implement. Specifically for learning near-optimal policies in Partially Observable Markov Decision Processes (POMDPs), existing algorithms either need to make strong assumptions about the model dynamics (e.g. deterministic transitions) or assume access to an oracle for solving a hard optimistic planning or estimation problem as a subroutine. In this work we develop the first oracle-free learning algorithm for POMDPs under reasonable assumptions. Specifically, we give a quasipolynomial-time end-to-end algorithm for learning in "observable" POMDPs, where observability is the assumption that well-separated distributions over states induce well-separated distributions over observations. Our techniques circumvent the more traditional approach of using the principle of optimism under uncertainty to promote exploration, and instead give a novel application of barycentric spanners to constructing policy covers.
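To make the observability assumption concrete, one common way to formalize it is as a quantitative non-contraction condition on the observation matrix; the notation $\mathbb{O}$, $\gamma$, and the $\ell_1$ norms below are illustrative conventions, not fixed by the abstract itself:

\[
% Sketch of a \gamma-observability condition (notation assumed for illustration).
% \mathbb{O} \in \mathbb{R}^{O \times S} is the column-stochastic observation matrix,
% whose s-th column is the distribution over observations emitted from latent state s.
% The condition asks that distinct belief states remain distinguishable after
% passing through \mathbb{O}:
  \big\| \mathbb{O}\, b - \mathbb{O}\, b' \big\|_1 \;\ge\; \gamma \, \| b - b' \|_1
  \qquad \text{for all distributions } b,\, b' \text{ over states},
\]

for some fixed $\gamma > 0$; a larger $\gamma$ means that well-separated distributions over states induce better-separated distributions over observations, which is the sense of "observable" used above.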