In a ride-hailing system, an optimal relocation of vacant vehicles can significantly reduce fleet idling time and balance the supply-demand distribution, enhancing system efficiency and promoting driver satisfaction and retention. Model-free deep reinforcement learning (DRL) has been shown to dynamically learn the relocation policy by actively interacting with the intrinsic dynamics in large-scale ride-hailing systems. However, the issues of sparse reward signals and unbalanced demand and supply distribution place critical barriers in developing effective DRL models. Conventional exploration strategies (e.g., $\epsilon$-greedy) may barely work in such an environment because of dithering in low-demand regions distant from high-revenue regions. This study proposes the deep relocating option policy (DROP), which supervises vehicle agents to escape from oversupply areas and effectively relocate to potentially underserved areas. We propose to learn the Laplacian embedding of a time-expanded relocation graph as an approximate representation of the system relocation policy. The embedding generates task-agnostic signals, which, in combination with task-dependent signals, constitute the pseudo-reward function for generating DROPs. We present a hierarchical learning framework that trains a high-level relocation policy and a set of low-level DROPs. The effectiveness of our approach is demonstrated using a custom-built high-fidelity simulator with real-world trip record data. We report that DROP significantly outperforms baseline models, generating 15.7% more hourly revenue, and can effectively resolve the dithering issue in low-demand areas.
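To make the core idea concrete, the following is a minimal, hypothetical sketch of the Laplacian-embedding step described above: eigenvectors of a graph Laplacian provide a low-frequency embedding of the relocation graph, and distances in this embedding space can act as task-agnostic pseudo-reward signals. The 5-node adjacency matrix and the `pseudo_reward` helper are illustrative assumptions, not the paper's actual formulation (which uses a time-expanded graph with zone-time nodes).

```python
import numpy as np

# Toy adjacency matrix for a hypothetical 5-zone relocation graph
# (in the paper, nodes would be (zone, time) pairs of a time-expanded graph).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

deg = A.sum(axis=1)
L = np.diag(deg) - A  # combinatorial graph Laplacian L = D - A

# Eigenvectors with the smallest eigenvalues give a smooth ("low-frequency")
# embedding of the graph; the first eigenvector is constant and is skipped.
eigvals, eigvecs = np.linalg.eigh(L)
embedding = eigvecs[:, 1:3]  # 2-dimensional Laplacian embedding

def pseudo_reward(state, goal, emb=embedding):
    """Illustrative task-agnostic signal: negative embedding distance,
    so reward increases as the agent moves toward the goal node."""
    return -float(np.linalg.norm(emb[state] - emb[goal]))
```

In a full pipeline, such task-agnostic signals would be combined with task-dependent ones (e.g., trip revenue) to form the pseudo-reward used to train each low-level DROP.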