We consider the problem of learning a control policy that is robust against parameter mismatches between the training and testing environments. We formulate this as a distributionally robust reinforcement learning (DR-RL) problem, where the objective is to learn a policy that maximizes the value function against the worst possible stochastic model of the environment within an uncertainty set. We focus on the tabular episodic learning setting, where the algorithm has access to a generative model of the nominal (training) environment around which the uncertainty set is defined. We propose the Robust Phased Value Learning (RPVL) algorithm to solve this problem for uncertainty sets specified by four different divergences: total variation, chi-square, Kullback-Leibler, and Wasserstein. We show that our algorithm achieves $\tilde{\mathcal{O}}(|\mathcal{S}||\mathcal{A}| H^{5})$ sample complexity, which is uniformly better than the existing results by a factor of $|\mathcal{S}|$, where $|\mathcal{S}|$ is the number of states, $|\mathcal{A}|$ is the number of actions, and $H$ is the horizon length. We also provide the first sample complexity result for the Wasserstein uncertainty set. Finally, we demonstrate the performance of our algorithm using simulation experiments.