Autonomous urban driving navigation with complex multi-agent dynamics is under-explored due to the difficulty of learning an optimal driving policy. Traditional modular pipelines rely heavily on hand-designed rules and a pre-processing perception system, while supervised learning-based models are limited by the availability of extensive human driving experience. We present a general and principled Controllable Imitative Reinforcement Learning (CIRL) approach that enables the driving agent to achieve higher success rates from vision inputs alone in a high-fidelity car simulator. To alleviate the low exploration efficiency of large continuous action spaces, which often prohibits the use of classical RL on challenging real tasks, CIRL explores over a reasonably constrained action space guided by encoded experiences that imitate human demonstrations, building upon Deep Deterministic Policy Gradient (DDPG). Moreover, we propose specialized adaptive policies and steering-angle reward designs for different control signals (i.e., follow, straight, turn right, turn left) on top of shared representations, improving the model's capability to tackle diverse cases. Extensive experiments on the CARLA driving benchmark demonstrate that CIRL substantially outperforms all previous methods in the percentage of successfully completed episodes on a variety of goal-directed driving tasks. We also show its superior generalization capability in unseen environments. To our knowledge, this is the first successful case of a driving policy learned through reinforcement learning in a high-fidelity simulator that performs better than supervised imitation learning.
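The control-signal-conditioned design described above can be illustrated with a minimal sketch: a shared feature representation feeding one policy head per command (follow, straight, turn right, turn left), with the active head selected by the navigation signal. This is not the paper's implementation; all class and parameter names here (`BranchedPolicy`, `feat_dim`, `act_dim`) are hypothetical, and the network is a toy NumPy stand-in for the actual vision-based DDPG actor.

```python
import numpy as np

# The four high-level control signals used to select a policy branch.
COMMANDS = ["follow", "straight", "turn_right", "turn_left"]

rng = np.random.default_rng(0)


class BranchedPolicy:
    """Toy command-conditioned actor: shared layer + one head per command."""

    def __init__(self, feat_dim=8, act_dim=2):
        # Shared representation weights (stand-in for a vision encoder).
        self.shared = 0.1 * rng.standard_normal((feat_dim, feat_dim))
        # One specialized head per control signal, e.g. (steer, throttle).
        self.heads = {
            c: 0.1 * rng.standard_normal((feat_dim, act_dim)) for c in COMMANDS
        }

    def act(self, obs, command):
        h = np.tanh(obs @ self.shared)          # shared representation
        return np.tanh(h @ self.heads[command])  # command-specific branch


policy = BranchedPolicy()
obs = rng.standard_normal(8)            # placeholder for encoded image features
action = policy.act(obs, "turn_left")   # bounded continuous action in [-1, 1]
```

In the full method, only the head matching the current navigation command receives gradients from the DDPG critic and the imitation-constrained exploration, so each branch specializes while the shared representation is trained by all commands.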