Active network management (ANM) of electricity distribution networks includes many complex stochastic sequential optimization problems. These problems need to be solved for integrating renewable energies and distributed storage into future electrical grids. In this work, we introduce Gym-ANM, a framework for designing reinforcement learning (RL) environments that model ANM tasks in electricity distribution networks. These environments provide new playgrounds for RL research in the management of electricity networks that do not require extensive knowledge of the underlying dynamics of such systems. Along with this work, we are releasing an implementation of an introductory toy environment, ANM6-Easy, designed to highlight common challenges in ANM. We also show that state-of-the-art RL algorithms can already achieve good performance on ANM6-Easy when compared against a model predictive control (MPC) approach. Finally, we provide guidelines for creating new Gym-ANM environments that differ in terms of (a) the distribution network topology and parameters, (b) the observation space, (c) the modelling of the stochastic processes present in the system, and (d) a set of hyperparameters influencing the reward signal. Gym-ANM can be downloaded at https://github.com/robinhenry/gym-anm.
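For illustration, Gym-ANM environments expose the standard OpenAI Gym interface. The sketch below shows a minimal random-agent interaction loop with ANM6-Easy, assuming the package registers the environment under the id `gym_anm:ANM6Easy-v0` (the id and episode length used here are illustrative, not prescriptive).

```python
import gym

# Build the introductory ANM6-Easy environment. The environment id is
# assumed from the gym-anm package registration; see the project README.
env = gym.make('gym_anm:ANM6Easy-v0')

obs = env.reset()
for _ in range(100):
    # Sample a random action; an RL agent would choose one from its policy.
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```

Because the environment follows the Gym API, off-the-shelf RL implementations (e.g., those benchmarked against the MPC baseline) can be trained on it without modification.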