This work studies the question of representation learning in RL: how can we learn a compact, low-dimensional representation on top of which we can perform RL procedures, such as exploration and exploitation, in a sample-efficient manner? We focus on low-rank Markov Decision Processes (MDPs), where the transition dynamics correspond to a low-rank transition matrix. Unlike prior works that assume the representation is known (e.g., linear MDPs), here we need to learn the representation for the low-rank MDP. We study both the online RL and offline RL settings. For the online setting, operating with the same computational oracles used in FLAMBE (Agarwal et al.), the state-of-the-art algorithm for learning representations in low-rank MDPs, we propose an algorithm REP-UCB (Upper Confidence Bound driven Representation learning for RL), which significantly improves the sample complexity from $\widetilde{O}( A^9 d^7 / (\epsilon^{10} (1-\gamma)^{22}))$ for FLAMBE to $\widetilde{O}( A^2 d^4 / (\epsilon^2 (1-\gamma)^{5}) )$, with $d$ being the rank of the transition matrix (or the dimension of the ground-truth representation), $A$ the number of actions, and $\gamma$ the discount factor. Notably, REP-UCB is simpler than FLAMBE, as it directly balances the interplay between representation learning, exploration, and exploitation, while FLAMBE is an explore-then-commit style approach that has to perform reward-free exploration step-by-step forward in time. For the offline RL setting, we develop an algorithm that leverages pessimism to learn under a partial coverage condition: our algorithm can compete against any policy as long as it is covered by the offline distribution.