We present the OMG-CMDP! algorithm for regret minimization in adversarial Contextual MDPs. The algorithm operates under the minimal assumptions of a realizable function class and access to online least squares and log loss regression oracles. Our algorithm is efficient (assuming efficient online regression oracles), simple, and robust to approximation errors. It enjoys an $\widetilde{O}(H^{2.5} \sqrt{ T|S||A| ( \mathcal{R}(\mathcal{O}) + H \log(\delta^{-1}) )})$ regret guarantee, where $T$ is the number of episodes, $S$ the state space, $A$ the action space, $H$ the horizon, and $\mathcal{R}(\mathcal{O}) = \mathcal{R}(\mathcal{O}_{\mathrm{sq}}^\mathcal{F}) + \mathcal{R}(\mathcal{O}_{\mathrm{log}}^\mathcal{P})$ the sum of the regrets of the regression oracles, which are used to approximate the context-dependent rewards and dynamics, respectively. To the best of our knowledge, our algorithm is the first efficient, rate-optimal regret minimization algorithm for adversarial CMDPs that operates under the minimal standard assumption of online function approximation.