The adaptive traffic signal control (ATSC) problem can be modeled as a multiagent cooperative game among urban intersections, where intersections cooperate to optimize their common goal. Recently, reinforcement learning (RL) has achieved marked success in managing sequential decision-making problems, which motivates us to apply RL to the ATSC problem. In this study, we use independent reinforcement learning (IRL) to solve a complex cooperative traffic control problem. One of the greatest challenges of this problem is that each intersection's observation is typically only partial, which limits the learning performance of IRL algorithms. To this end, we model the traffic control problem as a partially observable weak cooperative traffic model (PO-WCTM) to optimize the overall traffic situation of a group of intersections. Different from a traditional IRL task that averages the returns of all agents in fully cooperative games, the learning goal of each intersection in PO-WCTM is designed to reduce the difficulty of cooperative learning, which is also consistent with the traffic environment hypothesis. We also propose an IRL algorithm called Cooperative Important Lenient Double DQN (CIL-DDQN), which extends the Double DQN (DDQN) algorithm with two mechanisms: a forgetful experience mechanism and a lenient weight training mechanism. The former decreases the importance of experiences stored in the experience replay buffer, addressing the problem of experience failure caused by the policy changes of other agents. The latter increases the weight of experiences with high estimation and `leniently' trains the DDQN neural network, which improves the probability of selecting cooperative joint strategies. Experimental results show that CIL-DDQN outperforms other methods in almost all performance indicators of the traffic control problem.
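The two mechanisms described above can be illustrated with a minimal sketch. Note that this is an assumed implementation, not the authors' code: the class and function names (`ForgetfulReplayBuffer`, `lenient_td_weight`) and the specific decay and leniency rules are hypothetical, chosen only to convey the idea of decaying experience importance and leniency-weighted TD updates.

```python
import random

class ForgetfulReplayBuffer:
    """Hypothetical sketch of the 'forgetful experience mechanism':
    each stored experience carries an importance weight that decays as
    new experiences arrive, so transitions generated under other agents'
    outdated policies contribute less to training."""

    def __init__(self, capacity, decay=0.99):
        self.capacity = capacity
        self.decay = decay      # multiplicative importance decay per push
        self.buffer = []        # list of [experience, importance] pairs

    def push(self, experience):
        # Decay the importance of everything already stored,
        # then insert the new experience at full importance.
        for item in self.buffer:
            item[1] *= self.decay
        self.buffer.append([experience, 1.0])
        if len(self.buffer) > self.capacity:
            self.buffer.pop(0)  # evict the oldest experience

    def sample(self, batch_size):
        # Sample proportionally to the current importance weights.
        weights = [importance for _, importance in self.buffer]
        return random.choices(self.buffer, weights=weights, k=batch_size)


def lenient_td_weight(td_error, leniency=0.8):
    """Hypothetical 'lenient weight' rule: optimistic (non-negative) TD
    errors are trained at full weight, while pessimistic (negative)
    errors, possibly caused by teammates' exploration rather than a bad
    action, are down-weighted before the DDQN gradient step."""
    return 1.0 if td_error >= 0 else 1.0 - leniency
```

In a full CIL-DDQN training loop, the sampled batch's importance weights and the per-transition leniency weights would jointly scale each transition's contribution to the DDQN loss.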