We introduce and initiate the study of a natural class of repeated two-player matrix games, called Repeated-Until-Collision (RUC) games. In each round, both players simultaneously pick an action from a common action set $\{1, 2, \dots, n\}$. Depending on their chosen actions, they derive payoffs given by $n \times n$ matrices $A$ and $B$, respectively. If their actions collide (i.e., they pick the same action), the game ends; otherwise, it proceeds to the next round. Both players want to maximize their total payoff until the game ends. RUC games can be interpreted as pursuit-evasion games or repeated hide-and-seek games. They also generalize hand cricket, a popular game among children in India. We show that under mild assumptions on the payoff matrices, every RUC game admits a Nash equilibrium (NE). Moreover, we show the existence of a stationary NE, in which each player chooses their action according to a probability distribution over the action set that does not change across rounds. Remarkably, all NE are effectively the same as the stationary NE, so RUC games admit an almost unique NE. Lastly, we show how to compute (approximate) NE for RUC games.
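To make the setup concrete, the following minimal sketch simulates one play of an RUC game when both players use stationary mixed strategies. The function name `play_ruc`, the round cap `max_rounds`, and the convention that the colliding round still pays out are illustrative assumptions, not specifics taken from the paper.

```python
import numpy as np

def play_ruc(A, B, p, q, rng=None, max_rounds=10_000):
    """Simulate one play of an RUC game under stationary strategies p and q.

    A, B are the n x n payoff matrices; p, q are probability vectors over the
    n actions. Whether the colliding round itself pays out is a modeling
    choice; here it does (an illustrative assumption, not fixed by the paper).
    """
    rng = rng if rng is not None else np.random.default_rng()
    n = A.shape[0]
    total1 = total2 = 0.0
    for _ in range(max_rounds):  # cap the horizon so the simulation always stops
        i = rng.choice(n, p=p)   # player 1's action this round
        j = rng.choice(n, p=q)   # player 2's action this round
        total1 += A[i, j]
        total2 += B[i, j]
        if i == j:               # collision: the game ends
            break
    return total1, total2

# Example: n = 2 with uniform stationary strategies for both players.
A = np.array([[0.0, 1.0], [2.0, 0.0]])
B = np.array([[0.0, 2.0], [1.0, 0.0]])
p = q = np.array([0.5, 0.5])
print(play_ruc(A, B, p, q))
```

Since rounds are i.i.d. under stationary strategies, each round is a collision with probability $\sum_i p_i q_i$, so the number of rounds is geometric; averaging many such simulated plays estimates each player's expected total payoff under the payout convention used above.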