Reinforcement learning (RL) with linear function approximation has received increasing attention recently. However, existing work has focused on obtaining $\sqrt{T}$-type regret bounds, where $T$ is the number of steps. In this paper, we show that logarithmic regret is attainable under two recently proposed linear MDP assumptions, provided that there exists a positive sub-optimality gap for the optimal action-value function. Specifically, under the linear MDP assumption (Jin et al. 2019), the LSVI-UCB algorithm can achieve $\tilde{O}(d^{3}H^5/\text{gap}_{\text{min}}\cdot \log(T))$ regret; and under the linear mixture model assumption (Ayoub et al. 2020), the UCRL-VTR algorithm can achieve $\tilde{O}(d^{2}H^5/\text{gap}_{\text{min}}\cdot \log^3(T))$ regret, where $d$ is the dimension of the feature mapping, $H$ is the length of each episode, and $\text{gap}_{\text{min}}$ is the minimum sub-optimality gap. To the best of our knowledge, these are the first logarithmic regret bounds for RL with linear function approximation.
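For concreteness, the regret and the minimum sub-optimality gap referred to above can be stated in the standard episodic form below; this is the usual formulation with $T = KH$ steps over $K$ episodes, and the notation in the body of the paper may differ in minor details.
\[
\mathrm{Regret}(T) = \sum_{k=1}^{K} \Bigl[ V_1^*(s_1^k) - V_1^{\pi_k}(s_1^k) \Bigr],
\]
where $\pi_k$ is the policy executed in episode $k$ and $s_1^k$ is its initial state, and
\[
\mathrm{gap}_h(s,a) = V_h^*(s) - Q_h^*(s,a), \qquad
\text{gap}_{\text{min}} = \min_{(s,a,h)\,:\,\mathrm{gap}_h(s,a)>0} \mathrm{gap}_h(s,a),
\]
so that $\text{gap}_{\text{min}}$ is the smallest nonzero gap between the optimal value and the value of any sub-optimal action, taken over all states, actions, and stages $h \in [H]$.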