In a growing variety of contexts, a human player must interact with artificial players whose decisions are produced by decision-making algorithms. How should the human player play against these algorithms to maximize his utility? Does anything change depending on whether he faces a single artificial player or several? The main goal of this paper is to answer these two questions. We consider n-player games in normal form, repeated over time, in which we call the human player the optimizer and the (n-1) artificial players the learners. We assume that the learners play no-regret algorithms, a class of algorithms widely used in online learning and decision-making. In these games, we consider the concept of Stackelberg equilibrium. In a recent paper, Deng, Schneider, and Sivan have shown that in a 2-player game the optimizer can always guarantee an expected cumulative utility of at least the Stackelberg value per round. In our first result, we show, with counterexamples, that this is no longer true if the optimizer has to face more than one learner. Therefore, we generalize the definition of Stackelberg equilibrium by introducing the concept of correlated Stackelberg equilibrium. Finally, in our main result, we prove that the optimizer can guarantee an expected cumulative utility of at least the correlated Stackelberg value per round. Moreover, using a version of the strong law of large numbers, we show that our result also holds almost surely for the optimizer's realized utility, and not only in expectation.
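For concreteness, here is a minimal sketch of the two standard notions the abstract relies on; the notation ($A_i$ for action sets, $u_i$ for utilities, $x^t$ for the strategies played at round $t$) is ours and not taken from the paper. A learner $i$ plays a no-regret algorithm if its regret after $T$ rounds grows sublinearly,
$$ R_i(T) \;=\; \max_{a \in A_i} \sum_{t=1}^{T} u_i\big(a,\, x_{-i}^{t}\big) \;-\; \sum_{t=1}^{T} u_i\big(a_i^{t},\, x_{-i}^{t}\big) \;=\; o(T), $$
and, in the 2-player setting of Deng, Schneider, and Sivan, the Stackelberg value is the optimizer's utility when committing to its best mixed strategy against a best-responding follower,
$$ V_{\mathrm{Stack}} \;=\; \max_{x \in \Delta(A_1)} \; \max_{y \in \mathrm{BR}(x)} u_1(x, y), \qquad \mathrm{BR}(x) \;=\; \operatorname*{arg\,max}_{y \in \Delta(A_2)} u_2(x, y), $$
with ties broken in the optimizer's favor, as is the usual convention. The paper's guarantee is then that the optimizer's expected cumulative utility over $T$ rounds is at least $V_{\mathrm{Stack}} \cdot T - o(T)$.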