We present fictitious play dynamics for the general class of stochastic games and analyze its convergence properties in zero-sum stochastic games. Our dynamics involves agents forming beliefs on the opponent's strategy and on their own continuation payoff (Q-function), and playing a myopic best response using the estimated continuation payoffs. Agents update their beliefs at the states they visit, based on observations of the opponent's actions. A key property of the learning dynamics is that the beliefs on Q-functions are updated at a slower timescale than the beliefs on strategies. We show that, both in the model-based case and in the model-free case (where agents do not know the payoff functions and the state transition probabilities), the beliefs on strategies converge to a stationary mixed Nash equilibrium of the zero-sum stochastic game.
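To make the described dynamics concrete, the following is a minimal Python sketch of one possible reading of two-timescale fictitious play in a two-player zero-sum stochastic game, shown here in the model-based setting (payoffs and transition probabilities known). It is an illustrative sketch, not the paper's exact algorithm; the problem sizes, step-size schedules (`alpha`, `beta`), and discount factor `gamma` are assumed for the example.

```python
# Illustrative sketch (assumed details, not the authors' exact algorithm):
# each player keeps a belief about the opponent's mixed strategy per state
# and a belief about its own Q-function, plays a myopic best response, and
# updates the Q-function beliefs on a slower timescale than strategy beliefs.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 3, 2            # hypothetical problem size
gamma = 0.9                            # discount factor (assumed)
# r[s, a1, a2]: payoff to player 1 (player 2 receives -r) -- zero-sum
r = rng.uniform(-1, 1, (n_states, n_actions, n_actions))
# P[s, a1, a2, s']: known state transition probabilities (model-based case)
P = rng.dirichlet(np.ones(n_states), (n_states, n_actions, n_actions))

# pi_hat[i][s]: player i's belief on the opponent's mixed strategy at state s
# Q_hat[i][s]:  player i's estimated Q-matrix at state s (rows a1, cols a2)
pi_hat = [np.full((n_states, n_actions), 1.0 / n_actions) for _ in range(2)]
Q_hat = [np.zeros((n_states, n_actions, n_actions)) for _ in range(2)]

def value(i, s):
    """Estimated continuation value: best response against the current belief."""
    if i == 0:
        return np.max(Q_hat[0][s] @ pi_hat[0][s])    # maximize over own rows
    return np.max(Q_hat[1][s].T @ pi_hat[1][s])      # maximize over own columns

s = 0
for k in range(1, 50_000):
    alpha = 1.0 / (k + 1)                       # fast timescale: strategy beliefs
    beta = 1.0 / ((k + 1) * np.log(k + 2))      # slow timescale: Q-function beliefs

    # Myopic best responses to the beliefs, using estimated continuation payoffs.
    a1 = int(np.argmax(Q_hat[0][s] @ pi_hat[0][s]))
    a2 = int(np.argmax(Q_hat[1][s].T @ pi_hat[1][s]))

    # Each player observes the opponent's action and updates its strategy belief
    # at the visited state toward the observed (pure) action.
    e1, e2 = np.eye(n_actions)[a1], np.eye(n_actions)[a2]
    pi_hat[0][s] += alpha * (e2 - pi_hat[0][s])
    pi_hat[1][s] += alpha * (e1 - pi_hat[1][s])

    # Slower-timescale update of the Q-function beliefs (model-based form:
    # expected continuation values are computed with the known kernel P).
    cont1 = np.array([[sum(P[s, a, b, sp] * value(0, sp) for sp in range(n_states))
                       for b in range(n_actions)] for a in range(n_actions)])
    cont2 = np.array([[sum(P[s, a, b, sp] * value(1, sp) for sp in range(n_states))
                       for b in range(n_actions)] for a in range(n_actions)])
    Q_hat[0][s] += beta * (r[s] + gamma * cont1 - Q_hat[0][s])
    Q_hat[1][s] += beta * (-r[s] + gamma * cont2 - Q_hat[1][s])

    # Transition to the next state under the played action profile.
    s = int(rng.choice(n_states, p=P[s, a1, a2]))
```

In the model-free variant described in the abstract, the expected continuation terms above would instead be estimated from observed rewards and next states; the two-timescale structure (Q-function beliefs updated more slowly than strategy beliefs) is the same.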