Reinforcement learning-based (RL-based) energy management strategies (EMSs) are considered a promising solution for the energy management of electric vehicles with multiple power sources. They have been shown to outperform conventional methods in energy management problems in terms of both energy saving and real-time performance. However, previous studies have not systematically examined the essential elements of RL-based EMSs. This paper presents an empirical analysis of RL-based EMSs for a Plug-in Hybrid Electric Vehicle (PHEV) and a Fuel Cell Electric Vehicle (FCEV). The empirical analysis covers four aspects: the algorithm, the perception and decision granularity, the hyperparameters, and the reward function. The results show that off-policy algorithms develop more fuel-efficient solutions over the complete driving cycle than the other algorithms. Increasing the perception and decision granularity does not yield a more energy-efficient solution, but it better balances battery power and fuel consumption. The equivalent-energy optimization objective based on the instantaneous state of charge (SOC) variation is parameter-sensitive and can help RL-based EMSs achieve more energy-cost-efficient strategies.
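The equivalent-energy objective mentioned above can be illustrated with a minimal sketch: the per-step reward penalizes instantaneous fuel use plus the battery energy inferred from the SOC variation, converted through an equivalence factor. The function name, signature, and numerical values below are illustrative assumptions, not the paper's exact formulation.

```python
def ems_reward(fuel_rate_g_per_s, soc_prev, soc_curr,
               battery_capacity_wh=10_000.0,
               equiv_factor_g_per_wh=0.08,
               dt_s=1.0):
    """Negative equivalent fuel consumption over one decision step.

    All parameter values are illustrative assumptions; a real EMS would
    calibrate the equivalence factor and battery capacity to the vehicle.
    """
    fuel_g = fuel_rate_g_per_s * dt_s
    # Battery energy drawn this step, inferred from the instantaneous
    # SOC variation (positive when the battery discharges).
    battery_wh = (soc_prev - soc_curr) * battery_capacity_wh
    # Equivalent fuel mass: direct fuel plus battery energy converted
    # via the equivalence factor; the agent maximizes the negation.
    equivalent_g = fuel_g + equiv_factor_g_per_wh * battery_wh
    return -equivalent_g
```

The sensitivity noted in the results corresponds to the choice of `equiv_factor_g_per_wh`: too small and the agent depletes the battery, too large and it overuses fuel.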