Reinforcement learning has shown great potential for developing high-level autonomous driving. However, for high-dimensional tasks, current RL methods suffer from low data efficiency and oscillation during training. This paper proposes an algorithm called Learn to drive with Virtual Memory (LVM) to overcome these problems. LVM compresses high-dimensional observations into compact latent states and learns a latent dynamics model to summarize the agent's experience. The latent dynamics model then generates various imagined latent trajectories, which serve as virtual memory. The policy is learned by propagating gradients through the learned latent model along these imagined trajectories, which leads to high data efficiency. Furthermore, a double-critic structure is designed to reduce oscillation during training. The effectiveness of LVM is demonstrated on an image-input autonomous driving task, in which LVM outperforms existing methods in terms of data efficiency, learning stability, and control performance.
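As a rough illustration of the double-critic idea mentioned above (not the paper's actual implementation), the sketch below shows the common "clipped" variant: two independently trained critics both evaluate the next state, and the bootstrap target uses the smaller estimate, which tempers value overestimation and the resulting training oscillation. All names here are hypothetical.

```python
# Minimal sketch of a clipped double-critic bootstrap target,
# assuming two independently parameterized critics. This is an
# illustrative standard construction, not LVM's exact design.

def double_critic_target(reward, next_state, critic1, critic2, gamma=0.99):
    """Bootstrap target using the minimum of two critic estimates."""
    v1 = critic1(next_state)
    v2 = critic2(next_state)
    return reward + gamma * min(v1, v2)

# Toy critics that disagree about the value of the next state.
critic_a = lambda s: 10.0
critic_b = lambda s: 12.0

target = double_critic_target(reward=1.0, next_state=None,
                              critic1=critic_a, critic2=critic_b)
print(target)  # 1.0 + 0.99 * min(10.0, 12.0) = 10.9
```

Taking the minimum biases the target downward, trading a small pessimism for markedly less oscillation than a single, easily overestimating critic.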