Reinforcement learning (RL)-based driver assistance systems seek to reduce fuel consumption through continual improvement of powertrain control actions based on experiential data from the field. However, the need to explore diverse experiences in order to learn optimal policies often limits the application of RL techniques in safety-critical systems such as vehicle control. In this paper, an exponential control barrier function (ECBF) is derived and utilized to filter unsafe actions proposed by an RL-based driver assistance system. The RL agent freely explores and optimizes the performance objectives, while unsafe actions are projected to the closest actions in the safe domain. The reward is structured so that the driver's acceleration requests are met in a manner that improves fuel economy without compromising comfort. The optimal gear and traction torque control actions that maximize the cumulative reward are computed via the Maximum a Posteriori Policy Optimization (MPO) algorithm configured for a hybrid action space. The proposed safe-RL scheme is trained and evaluated in car-following scenarios, where it is shown to effectively avoid collisions during both training and evaluation while delivering the expected fuel economy improvements of the driver assistance system.
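As a minimal illustration of the safety-filter idea summarized above, the sketch below projects an RL-proposed acceleration onto an ECBF-safe set for a point-mass car-following model. The function name, barrier gains k1 and k2, the minimum-gap parameter d_min, and the default leader acceleration are all illustrative assumptions, not the paper's actual formulation or tuning.

```python
def ecbf_safe_action(a_rl, gap, v_ego, v_lead, a_lead=0.0,
                     d_min=5.0, k1=2.0, k2=1.0):
    """Project an RL-proposed acceleration onto an ECBF-safe set (illustrative).

    Barrier: h = gap - d_min (relative degree 2 with respect to acceleration).
    ECBF condition: h_ddot + k1*h_dot + k2*h >= 0, which for a point-mass
    car-following model yields an upper bound on the ego acceleration.
    Gains and defaults here are assumed values for the sketch only.
    """
    h = gap - d_min                  # barrier value
    h_dot = v_lead - v_ego           # barrier rate (closing speed)
    a_max_safe = a_lead + k1 * h_dot + k2 * h   # ECBF upper bound on ego acceleration
    # Closest safe action: clip only when the proposed action violates the bound.
    return min(a_rl, a_max_safe)


# Example: leader 30 m ahead, both vehicles at 20 m/s, agent requests 2 m/s^2.
print(ecbf_safe_action(a_rl=2.0, gap=30.0, v_ego=20.0, v_lead=20.0))
```

In this simplified setting the projection reduces to clipping the requested acceleration at the ECBF bound; the full scheme in the paper additionally handles the hybrid gear and traction torque action space.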