Partially Observable Markov Decision Processes (POMDPs) are notoriously hard to solve. Most state-of-the-art online solvers leverage ideas from Monte Carlo Tree Search (MCTS). These solvers rapidly converge to the most promising branches of the belief tree, avoiding suboptimal sections. Most of these algorithms are designed around direct access to a state-dependent reward and assume the belief-dependent reward is nothing more than the expectation of the state reward. They are therefore inapplicable to the more general and essential setting of belief-dependent rewards. One example of such a reward is the differential entropy, approximated using a set of weighted particles representing the belief. Such an information-theoretic reward introduces a significant computational burden. In this paper, we embed the simplification paradigm into the MCTS algorithm. In particular, we present Simplified Information-Theoretic Particle Filter Tree (SITH-PFT), a novel MCTS variant that accounts for information-theoretic rewards while avoiding their full calculation. We replace the costly calculation of information-theoretic rewards with adaptive upper and lower bounds, which are cheap to compute and are tightened only when the algorithm demands it. Crucially, we guarantee exactly the same belief tree and solution as would be obtained by an MCTS that explicitly calculates the original information-theoretic rewards. Our approach is general; namely, any bounds that converge to the reward can easily be plugged in to achieve a substantial speedup without any loss in performance.
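To make the computational burden concrete, the sketch below illustrates one common way to approximate differential entropy from a weighted particle belief, using a Gaussian kernel density estimate of the belief evaluated at the particle locations. This is an illustrative assumption, not the paper's exact estimator; the function name, bandwidth parameter, and kernel choice are hypothetical.

```python
import numpy as np
from scipy.stats import multivariate_normal


def weighted_particle_entropy(particles, weights, bandwidth=0.1):
    """Approximate the differential entropy -E[log p(x)] of a belief
    represented by weighted particles.

    particles: (n, d) array of particle states
    weights:   (n,)   array of normalized particle weights
    bandwidth: kernel width of the Gaussian KDE (illustrative choice)
    """
    n, d = particles.shape
    cov = (bandwidth ** 2) * np.eye(d)

    # KDE density at every particle: p(x_i) ~= sum_j w_j * K(x_i - x_j)
    densities = np.zeros(n)
    for j in range(n):
        densities += weights[j] * multivariate_normal.pdf(
            particles, mean=particles[j], cov=cov
        )

    # Monte Carlo estimate of -E[log p(x)] under the particle weights
    return -np.sum(weights * np.log(densities + 1e-300))
```

Note the quadratic cost in the number of particles: every reward evaluation touches all particle pairs, and such an evaluation is needed at every belief node expanded by MCTS. This is the kind of expense that the adaptive upper and lower bounds in SITH-PFT are designed to sidestep, tightening the bounds only when the tree search actually requires them to discriminate between actions.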