This paper studies continuous-time reinforcement learning (RL) for optimal switching problems across multiple regimes. We consider an exploratory formulation under entropy regularization in which the agent randomizes both the timing of switches and the selection of regimes through the generator matrix of an associated continuous-time finite-state Markov chain. We establish the well-posedness of the associated system of Hamilton-Jacobi-Bellman (HJB) equations and provide a characterization of the optimal policy. Policy improvement and the convergence of policy iteration are rigorously established by analyzing this system of equations. We also show that the value function of the exploratory formulation converges to the value function of the classical formulation as the temperature parameter vanishes. Finally, an RL algorithm is devised and implemented by invoking policy evaluation based on a martingale characterization. Numerical examples with the aid of neural networks illustrate the effectiveness of the proposed RL algorithm.