We study a novel setting in Online Markov Decision Processes (OMDPs) where the loss function is chosen by a non-oblivious strategic adversary who follows a no-external-regret algorithm. In this setting, we first demonstrate that MDP-Expert, an existing algorithm that works well against oblivious adversaries, can still be applied and achieves a policy regret bound of $\mathcal{O}(\sqrt{T \log(L)}+\tau^2\sqrt{T \log(|A|)})$, where $L$ is the size of the adversary's pure strategy set and $|A|$ denotes the size of the agent's action space. Motivated by real-world games in which the support size of a Nash equilibrium (NE) is small, we further propose a new algorithm, MDP-Online Oracle Expert (MDP-OOE), that achieves a policy regret bound of $\mathcal{O}(\sqrt{T\log(L)}+\tau^2\sqrt{T k \log(k)})$, where $k$ depends only on the support size of the NE. MDP-OOE leverages the key benefit of the Double Oracle method from game theory and can therefore handle games with prohibitively large action spaces. Finally, to better understand the learning dynamics of no-regret methods, we introduce, in the same setting of a no-external-regret adversary in OMDPs, an algorithm that achieves last-round convergence to an NE. To the best of our knowledge, this is the first work to establish a last-iterate convergence result in OMDPs.