Performing additional search at test time is a common way to improve the performance of reinforcement learning algorithms. However, search in adversarial games with imperfect information is notoriously difficult and often requires a complicated training process. We present an algorithm that builds on an arbitrary policy-gradient method learning from sampled trajectories in fully adversarial two-player games with imperfect information. Alongside the policy network, the algorithm trains an additional critic network, which outputs, for each of a fixed set of transformations of the policy given by the policy network, the expected value when both players follow that transformed policy. These values are then used for depth-limited search. We show how the values from this critic can be used to construct a value function for imperfect-information games. Moreover, they can be used to compute the summary statistics necessary to start the search from an arbitrary decision point in the game. The presented algorithm scales to very large games since it does not require any search at training time. Furthermore, given sufficient computational resources, our algorithm can choose at any point in the game whether to use search or to play according to the trained policy network. We evaluate the algorithm when trained alongside Regularized Nash Dynamics, comparing the use of search against playing directly from the policy network in the standard benchmark game of Leduc hold'em, multiple variants of imperfect-information Goofspiel, and a game of Battleships.
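A minimal sketch of the multi-valued critic idea described above, assuming a PyTorch-style setup: the critic has one output per policy transformation, each estimating the expected value when both players follow that transformed policy, and the depth-limited search queries these outputs at its depth limit. All names (MultiValueCritic, obs_dim, num_transformations) are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class MultiValueCritic(nn.Module):
    """Illustrative critic with one value head per fixed policy transformation."""

    def __init__(self, obs_dim: int, num_transformations: int, hidden: int = 128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One scalar per transformation: the expected value if both players
        # follow that transformation of the current policy.
        self.heads = nn.Linear(hidden, num_transformations)

    def forward(self, state_features: torch.Tensor) -> torch.Tensor:
        # Returns a tensor of shape (batch, num_transformations).
        return self.heads(self.body(state_features))


# Usage: at the depth limit, the search evaluates leaves with the critic
# instead of expanding the game tree further (hypothetical dimensions).
critic = MultiValueCritic(obs_dim=64, num_transformations=4)
leaf_values = critic(torch.randn(1, 64))  # one expected value per transformation
```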