Deep reinforcement learning (DRL) is a promising way to achieve human-like autonomous driving. To further advance the technology in this area, in this paper we propose a novel framework to incorporate human prior knowledge into the training of DRL agents. Our framework consists of three ingredients, namely expert demonstration, policy derivation, and reinforcement learning. In the expert demonstration step, a human expert demonstrates their execution of the task, and their behaviors are stored as state-action pairs. In the policy derivation step, the imitative expert policy is derived using behavioral cloning and uncertainty estimation based on the demonstration data. In the reinforcement learning step, the imitative expert policy is used to guide the learning of the DRL agent by regularizing the KL divergence between the DRL agent's policy and the imitative expert policy. To validate the proposed method in autonomous driving applications, two simulated urban driving scenarios, i.e., the unprotected left turn and the roundabout, are designed along with human expert demonstrations. The training results manifest the strengths of our method: it not only achieves the best performance but also significantly improves sample efficiency compared with the baseline algorithms (in particular, a 60% improvement over soft actor-critic). Under testing conditions, the agent trained by our method obtains the highest success rate and shows diverse driving behaviors with the human-like features demonstrated by the human expert. We also demonstrate that the imitative expert policy with deep ensemble-based uncertainty estimation leads to better performance, especially in the more difficult task. Consequently, the proposed method shows great potential to facilitate the application of DRL-enabled human-like autonomous driving in practice.
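The core mechanisms described above (an ensemble of behavioral-cloning policies providing an imitative expert policy with an uncertainty estimate, and a KL-divergence penalty pulling the DRL agent toward that expert) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the discrete action space, the linear ensemble members, the `beta` weight, and the placeholder policy-gradient term are all assumptions made for clarity.

```python
import numpy as np

def softmax(logits):
    """Convert raw logits into a probability distribution over actions."""
    z = np.exp(logits - logits.max())
    return z / z.sum()

def ensemble_expert_policy(state, ensemble):
    """Average the action distributions of the behavioral-cloning ensemble
    members; their disagreement (variance) serves as an epistemic
    uncertainty estimate for the imitative expert policy."""
    probs = np.stack([softmax(member(state)) for member in ensemble])
    return probs.mean(axis=0), float(probs.var(axis=0).mean())

def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) between two discrete action distributions."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def regularized_policy_loss(agent_probs, expert_probs, advantage, beta=0.1):
    """RL surrogate loss with a KL penalty that regularizes the DRL agent's
    policy toward the imitative expert policy. The first term is a
    placeholder for the agent's usual policy-gradient objective."""
    pg_loss = -float(np.sum(agent_probs * advantage))
    return pg_loss + beta * kl_divergence(agent_probs, expert_probs)
```

With this loss, the agent is penalized for deviating from the expert distribution in proportion to `beta`, so demonstration knowledge shapes exploration early in training while the RL term can still dominate once the agent surpasses the expert.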