This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems. In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users. DR has a widely recognized potential for improving power grid stability and reliability, while at the same time reducing end-users' energy bills. However, conventional DR techniques suffer from several shortcomings, such as the inability to handle operational uncertainties and the disutility they impose on end-users, which prevent their widespread adoption in real-world applications. The proposed framework addresses these shortcomings by implementing DR and DEM based on a real-time pricing strategy realized through deep reinforcement learning. Furthermore, the framework enables the power grid service provider to leverage distributed energy resources (i.e., rooftop PV panels and battery storage) as dispatchable assets to support the smart grid during peak hours, thus achieving management of distributed energy resources. Simulation results based on the Deep Q-Network (DQN) demonstrate significant improvements in the 24-hour cumulative profit for both prosumers and the power grid service provider, as well as major reductions in the utilization of the power grid's reserve generators.
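To make the real-time-pricing DR idea concrete, the sketch below trains a tabular Q-learning agent (a simplified stand-in for the paper's DQN) in which the service provider's state is the hour of day, the action is a retail price level, and the reward is the provider's profit under a price-elastic demand model. The demand profile, elasticity, price levels, and wholesale cost are all hypothetical illustration values, not the paper's environment.

```python
import numpy as np

# Minimal tabular Q-learning sketch of real-time-pricing demand response.
# All quantities below (demand profile, elasticity, prices, wholesale cost)
# are illustrative assumptions, not the paper's actual model or DQN.

rng = np.random.default_rng(0)

HOURS = 24                            # state: hour of day
PRICES = np.array([0.10, 0.20, 0.30])  # action: retail price levels ($/kWh)
BASE_DEMAND = 5 + 3 * np.sin(np.linspace(0, 2 * np.pi, HOURS))  # kWh/hour
ELASTICITY = 10.0                     # hypothetical price sensitivity (kWh per $/kWh)
WHOLESALE = 0.12                      # provider's procurement cost ($/kWh)

def step(hour: int, action: int) -> float:
    """Provider posts PRICES[action]; flexible demand shrinks as price rises.
    Returns the provider's profit for that hour (the reward)."""
    demand = max(BASE_DEMAND[hour] - ELASTICITY * (PRICES[action] - PRICES[0]), 0.0)
    return (PRICES[action] - WHOLESALE) * demand

Q = np.zeros((HOURS, len(PRICES)))    # Q-table over (hour, price level)
alpha, gamma, eps = 0.1, 0.95, 0.1    # learning rate, discount, exploration

for episode in range(2000):           # one episode = one simulated day
    for hour in range(HOURS):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.integers(len(PRICES))
        else:
            a = int(Q[hour].argmax())
        r = step(hour, a)
        nxt = (hour + 1) % HOURS
        # standard Q-learning temporal-difference update
        Q[hour, a] += alpha * (r + gamma * Q[nxt].max() - Q[hour, a])

policy = Q.argmax(axis=1)             # learned price level for each hour
print(policy)
```

In this toy setting the agent learns to post higher prices during high-demand hours and lower prices when flexible demand would otherwise vanish; the paper's framework replaces the Q-table with a DQN so the same idea scales to continuous, uncertain prosumer states.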