Prevailing reinforcement-learning-based traffic signal control methods optimize either the staging or the duration, depending on their action spaces. In this paper, we propose a novel control architecture, TBO, based on hybrid proximal policy optimization. To the best of our knowledge, TBO is the first RL-based algorithm to optimize staging and duration synchronously. Compared to purely discrete or purely continuous action spaces, the hybrid action space is a merged search space in which TBO better balances frequent switching against unsaturated release. Experiments demonstrate that TBO reduces queue length and delay by 13.78% and 14.08% on average, respectively, compared to existing baselines. Furthermore, we calculate Gini coefficients of the right-of-way to show that TBO does not sacrifice fairness while improving efficiency.
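To make the hybrid action space concrete, the sketch below (our own illustration, not the paper's code; the network sizes and names are assumptions) shows a policy in the style of hybrid PPO: a shared trunk feeds a categorical head that selects the next stage (discrete) and a Gaussian head that selects its green duration (continuous), with the joint log-probability factorizing across the two heads.

```python
import torch
import torch.nn as nn

class HybridPolicy(nn.Module):
    """Minimal hybrid-action policy sketch: a categorical head picks the
    next stage (discrete) and a Gaussian head picks its green duration
    (continuous). Layer sizes are illustrative, not the paper's."""

    def __init__(self, obs_dim: int, n_stages: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.stage_logits = nn.Linear(hidden, n_stages)   # discrete: which stage
        self.duration_mu = nn.Linear(hidden, n_stages)    # continuous: duration per stage
        self.duration_log_std = nn.Parameter(torch.zeros(n_stages))

    def forward(self, obs: torch.Tensor):
        h = self.trunk(obs)
        # Discrete head: sample the next stage.
        stage_dist = torch.distributions.Categorical(logits=self.stage_logits(h))
        stage = stage_dist.sample()
        # Continuous head: sample a duration for the chosen stage.
        mu = self.duration_mu(h).gather(-1, stage.unsqueeze(-1)).squeeze(-1)
        dur_dist = torch.distributions.Normal(mu, self.duration_log_std[stage].exp())
        duration = dur_dist.sample()
        # The joint log-probability factorizes over the two heads, so the
        # usual PPO clipped surrogate applies to the combined action.
        log_prob = stage_dist.log_prob(stage) + dur_dist.log_prob(duration)
        return stage, duration, log_prob
```

Because both heads share the trunk and contribute to a single log-probability, the staging and duration decisions are trained jointly rather than by two separate policies, which is the synchronous optimization the abstract refers to.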