A safe and efficient decision-making system is crucial for autonomous vehicles. However, the complexity of driving environments limits the effectiveness of many rule-based and machine learning approaches. Reinforcement Learning (RL), with its robust self-learning capabilities and environmental adaptability, offers a promising solution to these challenges. Nevertheless, safety and efficiency concerns during training hinder its widespread application. To address these concerns, we propose a novel RL framework, Simple to Complex Collaborative Decision (S2CD). First, we rapidly train the teacher model in a lightweight simulation environment. Then, in a more complex and realistic environment, the teacher intervenes when the student agent exhibits suboptimal behavior, assessing the value of candidate actions to avert danger. We also introduce an RL algorithm called Adaptive Clipping Proximal Policy Optimization (ACPPO), which combines samples from both the teacher and student policies and applies dynamic clipping strategies based on sample importance, improving sample efficiency while effectively alleviating data imbalance. Additionally, we employ the Kullback-Leibler divergence as a policy constraint and transform the constrained problem into an unconstrained one via the Lagrangian method, accelerating the student's learning. Finally, a gradual weaning strategy ensures that the student learns to explore independently over time, overcoming the teacher's limitations and maximizing performance. Simulation experiments in highway lane-change scenarios show that the S2CD framework improves learning efficiency, reduces training costs, and significantly enhances safety compared to state-of-the-art algorithms. The framework also ensures effective knowledge transfer between the teacher and student models: even when the teacher is suboptimal, the student achieves superior performance, demonstrating the robustness and effectiveness of S2CD.
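
As a rough illustration of the mechanisms summarized above, the sketch below combines a PPO-style clipped surrogate over a mixed batch of teacher and student samples with a sample-dependent clipping range and a Lagrangian-style KL penalty toward the teacher policy. This is a minimal, assumption-laden sketch rather than the paper's exact ACPPO objective; all names and default values (`acppo_like_loss`, `eps_teacher`, `kl_coef`, and so on) are hypothetical.

```python
# Illustrative sketch (not the paper's exact ACPPO objective): a PPO-style
# clipped surrogate over mixed teacher/student samples, with a per-sample
# clipping range and a KL penalty acting as a Lagrange-multiplier term.
import numpy as np

def acppo_like_loss(ratio, advantage, is_teacher, kl_to_teacher,
                    eps_student=0.2, eps_teacher=0.4, kl_coef=1.0):
    """PPO-style surrogate with sample-dependent clipping and a KL penalty.

    ratio         : pi_student(a|s) / pi_behavior(a|s) for each sample
    advantage     : advantage estimate for each sample
    is_teacher    : 1 if the sample came from the teacher policy, else 0
    kl_to_teacher : per-sample KL(student || teacher), used as a soft constraint
    """
    ratio = np.asarray(ratio, dtype=float)
    advantage = np.asarray(advantage, dtype=float)
    is_teacher = np.asarray(is_teacher, dtype=float)
    kl_to_teacher = np.asarray(kl_to_teacher, dtype=float)

    # Dynamic clipping: teacher samples get a wider clipping range, which
    # lets the (typically fewer) teacher samples drive larger updates and
    # thereby mitigates the teacher/student data imbalance.
    eps = np.where(is_teacher > 0, eps_teacher, eps_student)

    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    surrogate = np.minimum(unclipped, clipped)

    # Lagrangian-style relaxation: the KL constraint toward the teacher is
    # folded into the objective as a penalty instead of a hard constraint.
    return -surrogate.mean() + kl_coef * kl_to_teacher.mean()

# Toy usage with a small mixed batch of teacher and student samples.
loss = acppo_like_loss(
    ratio=[1.1, 0.9, 1.3], advantage=[0.5, -0.2, 0.8],
    is_teacher=[1, 0, 0], kl_to_teacher=[0.02, 0.05, 0.04])
print(f"surrogate loss: {loss:.4f}")
```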