Learning in POMDPs is known to be significantly harder than in MDPs. In this paper, we consider the online learning problem for episodic POMDPs with unknown transition and observation models. We propose a Posterior Sampling-based reinforcement learning algorithm for POMDPs (PS4POMDPs), which is considerably simpler and easier to implement than state-of-the-art optimism-based online learning algorithms for POMDPs. We show that the Bayesian regret of the proposed algorithm scales as the square root of the number of episodes, matching the lower bound, and is polynomial in the remaining parameters. In the general setting, the regret scales exponentially in the horizon length $H$, and we show that this is inevitable by providing a lower bound. However, when the POMDP is undercomplete and weakly revealing (assumptions common in recent literature), we establish a polynomial Bayesian regret bound. We also propose a posterior sampling algorithm for multi-agent POMDPs and show that it, too, achieves sublinear regret.
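To make the high-level description concrete, the following is a minimal sketch of the generic episodic posterior-sampling template that such an algorithm follows: in each episode, sample a POMDP (transition and observation model) from the current posterior, compute a policy for the sampled model, run it for $H$ steps, and update the posterior on the observed history. The callables (`sample_posterior`, `plan`, `update_posterior`, `env_reset`, `env_step`) are hypothetical placeholders, not the paper's implementation, and exact posterior updates and POMDP planning are themselves computationally nontrivial in general.

```python
# Generic posterior-sampling loop for episodic POMDPs (illustrative sketch only).
# All callables below are assumed to be supplied by the caller.

from typing import Any, Callable, List, Tuple


def posterior_sampling_loop(
    sample_posterior: Callable[[Any], Any],              # posterior -> sampled POMDP model
    plan: Callable[[Any, int], Callable],                # (model, horizon) -> policy over histories
    update_posterior: Callable[[Any, list], Any],        # (posterior, episode data) -> updated posterior
    env_reset: Callable[[], Any],                        # true environment: returns initial observation
    env_step: Callable[[Any], Tuple[Any, float, bool]],  # action -> (observation, reward, done)
    prior: Any,
    num_episodes: int,
    horizon: int,
) -> List[list]:
    """Run the episodic posterior-sampling loop and return all collected episodes."""
    posterior = prior
    all_episodes: List[list] = []
    for _ in range(num_episodes):
        model = sample_posterior(posterior)    # draw one POMDP from the current posterior
        policy = plan(model, horizon)          # (approximately) optimal policy for the sampled model
        obs = env_reset()
        episode: list = [obs]
        for h in range(horizon):
            action = policy(episode, h)        # act on the observable action-observation history
            obs, reward, done = env_step(action)
            episode.append((action, obs, reward))
            if done:
                break
        all_episodes.append(episode)
        posterior = update_posterior(posterior, episode)  # Bayesian update on the new data
    return all_episodes
```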