We consider Monte Carlo Tree Search (MCTS), a popular tree-based search strategy within the reinforcement learning framework, in the context of finite-horizon Markov decision processes. We propose a dynamic sampling tree policy that efficiently allocates a limited computational budget to maximize the probability of correctly selecting the best action at the root node of the tree. Experimental results on Tic-Tac-Toe and Gomoku show that the proposed tree policy is more efficient than competing methods.
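The abstract does not specify the proposed dynamic sampling policy itself, but the root-node selection problem it targets can be illustrated with a standard baseline. The sketch below shows a UCB1-style allocation of a fixed sampling budget over root actions, followed by recommending the action with the highest empirical mean; the simulated Gaussian rollout returns and all parameter values are illustrative assumptions, not the paper's method.

```python
import math
import random

def ucb1_root_allocation(arm_means, budget, c=2.0, seed=0):
    """Allocate a fixed sampling budget over root actions with UCB1,
    then return the index of the empirically best action.

    arm_means are the true mean rewards of each root action, used here
    only to simulate noisy rollout returns; in an actual MCTS these
    values would come from tree rollouts, not be known in advance.
    """
    rng = random.Random(seed)
    n = len(arm_means)
    counts = [0] * n
    sums = [0.0] * n

    # Initialization: sample each root action once.
    for a in range(n):
        sums[a] += arm_means[a] + rng.gauss(0.0, 1.0)
        counts[a] += 1

    # Spend the remaining budget following the UCB1 index.
    for t in range(n, budget):
        ucb = [sums[a] / counts[a] + c * math.sqrt(math.log(t) / counts[a])
               for a in range(n)]
        a = max(range(n), key=lambda i: ucb[i])
        sums[a] += arm_means[a] + rng.gauss(0.0, 1.0)
        counts[a] += 1

    # Recommend the action with the highest empirical mean.
    return max(range(n), key=lambda a: sums[a] / counts[a])

best = ucb1_root_allocation([0.2, 0.5, 0.9], budget=2000)
print(best)
```

Note that UCB1 minimizes cumulative regret during search, whereas the abstract's stated objective is the probability of correct selection at the root; a policy optimized for that criterion would allocate samples differently, which is the gap the proposed method addresses.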