Applications of Reinforcement Learning (RL) in which agents learn to make a sequence of decisions despite lacking complete information about the latent states of the controlled system (i.e., they act under partial observability of the states) are ubiquitous. Partially observable RL can be notoriously difficult -- well-known information-theoretic results show that learning partially observable Markov decision processes (POMDPs) requires an exponential number of samples in the worst case. Yet, this does not rule out the existence of large subclasses of POMDPs over which learning is tractable. In this paper we identify such a subclass, which we call weakly revealing POMDPs. This family rules out the pathological instances of POMDPs where observations are uninformative to a degree that makes learning hard. We prove that for weakly revealing POMDPs, a simple algorithm combining optimism and Maximum Likelihood Estimation (MLE) is sufficient to guarantee polynomial sample complexity. To the best of our knowledge, this is the first provably sample-efficient result for learning from interactions in overcomplete POMDPs, where the number of latent states can be larger than the number of observations.
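For concreteness, one plausible way to formalize the weakly revealing condition (stated here as an illustrative assumption, not quoted from the abstract) is in terms of the emission matrices $\mathbb{O}_h \in \mathbb{R}^{O \times S}$ that map latent states to observation distributions at each step $h$: the POMDP is $\alpha$-weakly revealing if
\[
\sigma_S(\mathbb{O}_h) \;\ge\; \alpha \;>\; 0 \qquad \text{for all } h \in [H],
\]
where $\sigma_S(\cdot)$ denotes the $S$-th largest singular value. Intuitively, no two distinct distributions over latent states may induce (nearly) identical observation distributions, which is precisely the degeneracy behind the information-theoretic hard instances. In the overcomplete regime ($S > O$) a single-step emission matrix cannot have rank $S$, so the analogous condition would presumably be imposed on a multi-step emission matrix built from several consecutive observations and actions. The "optimism plus MLE" recipe mentioned above would then, in one natural instantiation, maintain a confidence set of all candidate models whose log-likelihood on the data collected so far is within a slack $\beta$ of the maximum, and execute the policy that is optimal for the most favorable model in that set.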