Lane change in dense traffic typically requires recognizing an appropriate opportunity for the maneuver, which remains a challenging problem in autonomous driving. In this work, we propose a chance-aware lane-change strategy with high-level model predictive control (MPC) through curriculum reinforcement learning (CRL). In our proposed framework, a neural policy generates full-state references together with regulatory factors that determine the relative importance of each cost term in the embodied MPC. Furthermore, effective curricula are designed and integrated into an episodic reinforcement learning (RL) framework with policy transfer and enhancement, improving convergence speed and ensuring a high-quality policy. The proposed framework is deployed and evaluated in numerical simulations of dense and dynamic traffic. Notably, given only a narrow chance, the proposed approach generates high-quality lane-change maneuvers, and the vehicle merges into the traffic flow with a success rate of 96%. Finally, our framework is validated in a high-fidelity simulator under dense traffic, demonstrating satisfactory practicality and generalizability.
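To illustrate the interface between the neural policy and the embodied MPC described above, the following is a minimal sketch, not the authors' implementation: the helper names (`split_policy_output`, `mpc_stage_cost`), the action layout, and the quadratic cost form are all assumptions used only to show how a single policy output could supply both a full-state reference and per-term cost weights.

```python
import numpy as np

def split_policy_output(action, n_state):
    # Hypothetical action layout (assumption, not from the paper):
    # the first n_state entries form the full-state reference, and the
    # remaining entries are regulatory factors weighting each MPC cost term.
    x_ref = action[:n_state]
    weights = np.exp(action[n_state:])  # exponentiate to keep cost weights positive
    return x_ref, weights

def mpc_stage_cost(x, u, x_ref, weights):
    # Illustrative weighted quadratic stage cost: reference tracking plus
    # control effort, with the relative importance set by the policy.
    w_track, w_ctrl = weights[0], weights[1]
    return w_track * np.sum((x - x_ref) ** 2) + w_ctrl * np.sum(u ** 2)
```

In this sketch the MPC solver itself is omitted; the point is only that the policy's output parameterizes the optimization rather than commanding low-level controls directly.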