The low rank MDP has emerged as an important model for studying representation learning and exploration in reinforcement learning. With a known representation, several model-free exploration strategies exist. In contrast, all algorithms for the unknown representation setting are model-based, thereby requiring the ability to model the full dynamics. In this work, we present the first model-free representation learning algorithms for low rank MDPs. The key algorithmic contribution is a new minimax representation learning objective, for which we provide variants with differing tradeoffs in their statistical and computational properties. We interleave this representation learning step with an exploration strategy to cover the state space in a reward-free manner. The resulting algorithms are provably sample efficient and can accommodate general function approximation to scale to complex environments.
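To give a flavor of what such a minimax objective can look like, the following is a hedged sketch only; the abstract does not specify the exact form, and the candidate feature class $\Phi$, discriminator class $\mathcal{F}$, comparator class $\mathcal{G}$, and exploratory dataset $D$ are assumptions introduced here for illustration. The idea is to prefer a representation whose best linear predictor explains every discriminator's value at the next state as well as an unconstrained regressor does:

\[
\hat{\phi} \in \arg\min_{\phi \in \Phi} \; \max_{f \in \mathcal{F}} \; \Big\{ \min_{w} \widehat{\mathbb{E}}_{D}\big[\big(\langle \phi(s,a), w\rangle - f(s')\big)^2\big] \;-\; \min_{g \in \mathcal{G}} \widehat{\mathbb{E}}_{D}\big[\big(g(s,a) - f(s')\big)^2\big] \Big\}.
\]

Under this reading, the inner maximization searches for a discriminator that exposes predictive error of the candidate representation, while the subtracted term baselines against the best achievable fit, so the objective is model-free in that it never reconstructs the full transition dynamics.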