We study episodic two-player zero-sum Markov games (MGs) in the offline setting, where the goal is to find an approximate Nash equilibrium (NE) policy pair based on a dataset collected a priori. When the dataset does not have uniform coverage over all policy pairs, finding an approximate NE involves challenges in three aspects: (i) distributional shift between the behavior policy and the optimal policy, (ii) function approximation to handle large state spaces, and (iii) minimax optimization for equilibrium solving. We propose a pessimism-based algorithm, dubbed pessimistic minimax value iteration (PMVI), which overcomes the distributional shift by constructing pessimistic estimates of the value functions for both players and outputs a policy pair by solving NEs based on the two value functions. Furthermore, we establish a data-dependent upper bound on the suboptimality which recovers a sublinear rate without assuming uniform coverage of the dataset. We also prove an information-theoretic lower bound, which shows that the data-dependent term in the upper bound is intrinsic. Our theoretical results also highlight a notion of "relative uncertainty," which characterizes the necessary and sufficient condition for achieving sample efficiency in offline MGs. To the best of our knowledge, we provide the first nearly minimax optimal result for offline MGs with function approximation.