Deep reinforcement learning has shown promise in various engineering applications, including vehicular traffic control. The non-stationary nature of traffic, especially in lane-free environments where vehicles have more degrees of freedom in their behavior, poses challenges for decision-making, since a wrong action can lead to catastrophic failure. In this paper, we propose a novel driving strategy for Connected and Automated Vehicles (CAVs) based on a competitive Multi-Agent Deep Deterministic Policy Gradient (MADDPG) approach. The developed multi-agent deep reinforcement learning algorithm creates a dynamic and non-stationary training scenario, mirroring real-world traffic complexities and making the trained agents more robust. The reward function is formulated to cover multiple vehicle control tasks, including maintaining desired speeds, overtaking, collision avoidance, and merging and diverging maneuvers. In addition, lateral and longitudinal passenger comfort and safety criteria are taken into account. We employ inter-vehicle forces, known as nudging and repulsive forces, to manage the maneuvers of CAVs in a lane-free traffic environment. The proposed driving algorithm is trained and evaluated on lane-free roads using the Simulation of Urban MObility (SUMO) platform. Experimental results demonstrate the algorithm's efficacy in handling different objectives, highlighting its potential to enhance safety and efficiency in autonomous driving within lane-free traffic environments.
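To illustrate the idea of nudging and repulsive inter-vehicle forces mentioned above, the following minimal Python sketch combines a distance-based repulsive term (keeping vehicles apart) with a nudging term exerted by a faster vehicle approaching from behind. The function name, gains, and influence range are illustrative assumptions and do not reproduce the paper's exact formulation.

```python
import numpy as np

def inter_vehicle_forces(ego_pos, ego_vel, neighbors,
                         k_rep=2.0, k_nudge=0.8, influence_range=20.0):
    """Illustrative potential-field-style combination of repulsive and
    nudging forces between CAVs on a lane-free road.

    ego_pos / ego_vel: 2D arrays (longitudinal x, lateral y).
    neighbors: list of (pos, vel) tuples for surrounding vehicles.
    Gains and the influence range are placeholder values, not taken
    from the paper.
    """
    total_force = np.zeros(2)
    for pos, vel in neighbors:
        offset = ego_pos - pos
        dist = np.linalg.norm(offset) + 1e-6
        if dist > influence_range:
            continue
        direction = offset / dist
        # Repulsive force: pushes the ego vehicle away from a close
        # neighbor, growing as the gap shrinks (collision avoidance
        # and lateral spacing in the lane-free setting).
        total_force += k_rep * (1.0 / dist - 1.0 / influence_range) * direction
        # Nudging force: a faster vehicle approaching from behind
        # gently pushes the ego vehicle aside, creating space for an
        # overtaking maneuver.
        if pos[0] < ego_pos[0] and vel[0] > ego_vel[0]:
            lateral_dir = np.array([0.0, np.sign(offset[1]) or 1.0])
            total_force += k_nudge * (vel[0] - ego_vel[0]) / dist * lateral_dir
    return total_force  # mapped downstream to lateral/longitudinal control
```

Under these assumptions, the resulting force vector could feed into the agents' observations or reward shaping, but how the forces enter the MADDPG pipeline is specific to the paper's design.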