We study the problem of reinforcement learning (RL) with low (policy) switching cost, a problem well motivated by real-life RL applications in which deploying a new policy is costly and the number of policy updates must be kept low. In this paper, we propose a new algorithm based on stage-wise exploration and adaptive policy elimination that achieves a regret of $\widetilde{O}(\sqrt{H^4S^2AT})$ while requiring a switching cost of $O(HSA \log\log T)$. This is an exponential improvement over the best-known switching cost $O(H^2SA\log T)$ among existing methods with $\widetilde{O}(\mathrm{poly}(H,S,A)\sqrt{T})$ regret. Here, $S$ and $A$ denote the numbers of states and actions in an $H$-horizon episodic Markov Decision Process with unknown transitions, and $T$ is the number of steps. As a byproduct of our new techniques, we also derive a reward-free exploration algorithm with a switching cost of $O(HSA)$. Furthermore, we prove a pair of information-theoretic lower bounds stating that (1) any no-regret algorithm must have a switching cost of $\Omega(HSA)$, and (2) any $\widetilde{O}(\sqrt{T})$-regret algorithm must incur a switching cost of $\Omega(HSA\log\log T)$. Both of our algorithms are therefore optimal in their switching costs.
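To make the $O(\log\log T)$ dependence concrete, here is a minimal sketch of a doubly-growing stage-length schedule of the kind commonly paired with stage-wise exploration; this particular schedule is an illustrative assumption, not necessarily the exact construction in our algorithm. If stage $i$ ends after $T_i = T^{1-2^{-i}}$ total steps, then
\[
  T_{i+1} \;=\; \sqrt{T\,T_i}, \qquad
  K \;=\; \big\lceil \log_2 \log_2 T \big\rceil
  \;\Longrightarrow\;
  T_K \;=\; T^{1-2^{-K}} \;\ge\; T/2,
\]
so $O(\log\log T)$ stages already cover the whole horizon. If, in addition, the deployed policy changes only $O(HSA)$ times within each stage (roughly one exploratory policy per step-state-action triple), the total switching cost is $O(HSA\log\log T)$, consistent with the bound stated above.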