A safe and efficient decision-making system is crucial for autonomous vehicles. However, the complexity of driving environments limits the effectiveness of many rule-based and machine-learning-based decision-making approaches. Reinforcement learning offers a promising solution to these challenges, although concerns about safety and efficiency during training remain major obstacles to its widespread application in autonomous driving. To address these concerns, we propose a novel framework named Simple to Complex Collaborative Decision. First, we rapidly train a teacher model with the Proximal Policy Optimization algorithm in a lightweight autonomous driving simulation environment. In a more complex simulation environment, the teacher model intervenes when the student agent exhibits sub-optimal behavior, assessing the value of candidate actions to avert dangerous situations. Next, we develop a novel algorithm called Adaptive Clipping Proximal Policy Optimization, which trains on a mixture of samples generated by the teacher and student policies and applies a dynamic clipping strategy based on sample importance, allowing it to exploit samples from diverse sources more efficiently. Additionally, we use the KL divergence between the teacher's and student's policies as a constraint on policy optimization, so that the student agent quickly learns from the teacher's policy. Finally, an appropriate weaning strategy gradually reduces teacher intervention, ensuring that the student agent can fully explore the environment on its own in the later stages of training. Simulation experiments in highway lane-change scenarios demonstrate that, compared with baseline algorithms, the proposed framework not only improves learning efficiency and reduces training cost but also significantly enhances safety during training.
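To make the core training step concrete, the following is a minimal sketch of a PPO-style loss combining the two ideas the abstract names: a per-sample adaptive clip range and a KL constraint pulling the student toward the teacher. The abstract does not give the exact formulation, so the clipping schedule, the soft-penalty treatment of the KL constraint, and all names here (`adaptive_clip_ppo_loss`, `importance`, `eps_base`, `kl_coef`) are illustrative assumptions, not the paper's definitive method.

```python
# Sketch of an adaptive-clipping PPO surrogate with a KL penalty toward a
# frozen teacher policy. Assumes discrete actions and PyTorch >= 1.9
# (torch.clamp with Tensor bounds).
import torch
from torch.distributions import Categorical, kl_divergence


def adaptive_clip_ppo_loss(
    student_logits: torch.Tensor,  # (B, A) current student policy logits
    teacher_logits: torch.Tensor,  # (B, A) frozen teacher policy logits
    old_log_probs: torch.Tensor,   # (B,) log-probs under the behavior policy
    actions: torch.Tensor,         # (B,) actions from teacher- or student-generated samples
    advantages: torch.Tensor,      # (B,) advantage estimates
    importance: torch.Tensor,      # (B,) sample-importance weights in [0, 1] (assumed)
    eps_base: float = 0.2,         # base PPO clip range
    kl_coef: float = 0.1,          # weight of the KL(student || teacher) term
) -> torch.Tensor:
    student_dist = Categorical(logits=student_logits)
    teacher_dist = Categorical(logits=teacher_logits)

    # Probability ratio between the current student policy and the
    # behavior policy that generated the samples.
    log_probs = student_dist.log_prob(actions)
    ratio = torch.exp(log_probs - old_log_probs)

    # One plausible "dynamic clipping" rule: widen the clip range for
    # high-importance samples so they can drive larger policy updates.
    eps = eps_base * (1.0 + importance)

    # Standard PPO clipped surrogate, with the per-sample clip range eps.
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    surrogate = torch.min(unclipped, clipped).mean()

    # KL term encouraging the student to stay close to the teacher,
    # implemented here as a soft penalty rather than a hard constraint.
    kl = kl_divergence(student_dist, teacher_dist).mean()

    # Negated because optimizers minimize; a weaning schedule could anneal
    # kl_coef toward zero as teacher intervention is withdrawn.
    return -(surrogate - kl_coef * kl)
```

Annealing `kl_coef` (and the teacher's intervention rate) toward zero over training would correspond to the weaning strategy described above, leaving the student free to explore independently in the later stages.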