This paper proposes a finite-horizon approximation scheme and introduces episodic equilibrium as a solution concept for stochastic games (SGs), in which agents strategize based on the current state and the stage within the episode. The paper also establishes an upper bound on the approximation error that decays with the episode length, for both discounted and time-averaged utilities. This approach bridges the gap between the analysis of finite- and infinite-horizon SGs and provides a unifying framework for addressing time-averaged and discounted utilities. To demonstrate the effectiveness of the scheme, the paper presents episodic, decentralized (i.e., payoff-based), and model-free learning dynamics proven to reach (near) episodic equilibrium in broad classes of SGs, including zero-sum, identical-interest, and certain general-sum SGs with switching controllers, under both time-averaged and discounted utilities.
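As a rough illustration of why a finite-horizon (episodic) approximation can be accurate for discounted utilities, the sketch below computes the standard truncation bound: discarding rewards beyond an episode of length H costs at most gamma^H * r_max / (1 - gamma) in discounted value. This is a minimal, generic calculation under assumed symbols (gamma, H, r_max), not the paper's algorithm or its exact error bound.

```python
# Minimal sketch (assumed setup, not the paper's construction): shows how the
# discounted value lost by ignoring all rewards after stage H decays
# geometrically with the episode length H.

def truncation_error_bound(gamma: float, H: int, r_max: float = 1.0) -> float:
    """Bound on the tail sum_{t >= H} gamma^t * r_max = gamma^H * r_max / (1 - gamma)."""
    return gamma ** H * r_max / (1.0 - gamma)


if __name__ == "__main__":
    gamma = 0.95  # illustrative discount factor
    for H in (10, 50, 100, 200):
        print(f"H = {H:4d}  truncation error bound = {truncation_error_bound(gamma, H):.3e}")
```

Longer episodes shrink the bound geometrically, which is consistent with the abstract's claim that the approximation error decays with the episode length.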