We study the continuous-time counterpart of Q-learning for reinforcement learning (RL) under the entropy-regularized, exploratory diffusion process formulation introduced by Wang et al. (2020). As the conventional (big) Q-function collapses in continuous time, we consider its first-order approximation and coin the term ``(little) q-function''. This function is related to the instantaneous advantage rate function as well as the Hamiltonian. We develop a ``q-learning'' theory around the q-function that is independent of time discretization. Given a stochastic policy, we jointly characterize the associated q-function and value function by martingale conditions of certain stochastic processes, in both on-policy and off-policy settings. We then apply the theory to devise different actor-critic algorithms for solving underlying RL problems, depending on whether or not the density function of the Gibbs measure generated from the q-function can be computed explicitly. One of our algorithms interprets the well-known Q-learning algorithm SARSA, and another recovers a policy gradient (PG) based continuous-time algorithm proposed in Jia and Zhou (2022b). Finally, we conduct simulation experiments to compare the performance of our algorithms with those of PG-based algorithms in Jia and Zhou (2022b) and time-discretized conventional Q-learning algorithms.
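To fix ideas, here is a schematic sketch of the two relations alluded to above; the notation ($J^{\pi}$ for the value function of a stochastic policy $\pi$, $\gamma>0$ for the temperature of the entropy regularizer, $Q_{\Delta t}$ for the conventional Q-function under a time discretization of step $\Delta t$) is illustrative rather than quoted from the paper. The collapse of the big Q-function and its first-order approximation can be summarized as
\[
Q_{\Delta t}(t,x,a) = J^{\pi}(t,x) + q^{\pi}(t,x,a)\,\Delta t + o(\Delta t), \qquad \Delta t \to 0,
\]
so the big Q-function degenerates to the action-independent value function in the continuous-time limit, while the little q-function survives as the first-order coefficient. The Gibbs measure generated from a q-function $q$ takes the Boltzmann form
\[
\pi(a \mid t,x) \;\propto\; \exp\!\Big(\tfrac{1}{\gamma}\, q(t,x,a)\Big),
\]
and whether its normalizing constant is available in closed form is what distinguishes the two families of actor-critic algorithms mentioned above.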