Deep Reinforcement Learning (DRL) has shown remarkable success in solving complex tasks across various research fields. However, transferring DRL agents to the real world remains challenging due to the significant discrepancies between simulation and reality. To address this issue, we propose a robust DRL framework that leverages platform-dependent perception modules to extract task-relevant information and to train a lane-following and overtaking agent in simulation. This framework facilitates the seamless transfer of the DRL agent to new simulated environments and to the real world with minimal effort. We evaluate the agent's performance in various driving scenarios in both simulation and the real world, and compare it in simulation against human players and a PID baseline. Our proposed framework significantly reduces the discrepancies between different platforms and thus the Sim2Real gap, enabling the trained agent to drive the vehicle effectively and achieve comparable performance in both simulation and the real world.
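To make the idea of platform-dependent perception modules concrete, the following is a minimal Python sketch, not the paper's implementation: it shows a shared, task-relevant state representation produced by per-platform perception code, so that a single policy interface can be reused across simulation and the real vehicle. All class names, fields (e.g. `lateral_offset`, `lead_gap`), and the toy control rule standing in for the trained DRL policy are hypothetical illustrations.

```python
# Hypothetical sketch: platform-specific perception -> shared task state -> one policy.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class TaskState:
    """Platform-independent, task-relevant features consumed by the policy."""
    lateral_offset: float   # signed distance from the lane centre [m]
    heading_error: float    # angle between vehicle heading and lane direction [rad]
    lead_gap: float         # distance to the vehicle ahead [m]


class PerceptionModule(ABC):
    """Each platform (simulator, real car) supplies its own implementation."""

    @abstractmethod
    def extract(self, raw_observation) -> TaskState: ...


class SimPerception(PerceptionModule):
    def extract(self, raw_observation) -> TaskState:
        # In simulation the ground-truth quantities are often available directly.
        return TaskState(
            lateral_offset=raw_observation["lateral_offset"],
            heading_error=raw_observation["heading_error"],
            lead_gap=raw_observation["lead_gap"],
        )


class RealCarPerception(PerceptionModule):
    def extract(self, raw_observation) -> TaskState:
        # On the real vehicle the same quantities would be estimated from
        # onboard sensors; fixed placeholders are used here for illustration.
        return TaskState(lateral_offset=0.0, heading_error=0.0, lead_gap=30.0)


class Agent:
    """Stand-in for the trained DRL policy; it only ever sees TaskState."""

    def act(self, state: TaskState) -> dict:
        # A trivial proportional rule in place of the learned policy,
        # just to show that the policy interface is platform-agnostic.
        steer = -0.5 * state.lateral_offset - 0.2 * state.heading_error
        throttle = 0.3 if state.lead_gap > 10.0 else 0.0
        return {"steer": steer, "throttle": throttle}


if __name__ == "__main__":
    agent = Agent()
    for perception in (SimPerception(), RealCarPerception()):
        raw = {"lateral_offset": 0.4, "heading_error": 0.05, "lead_gap": 25.0}
        action = agent.act(perception.extract(raw))
        print(type(perception).__name__, action)
```

Under this assumed structure, transferring the agent to a new platform only requires implementing another `PerceptionModule`; the policy itself is left unchanged, which is the mechanism by which the platform and Sim2Real gaps described above are reduced.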