The operation of electricity grids has become increasingly complex due to the ongoing transformation of the energy system and the growing share of renewable energy production. As a consequence, active grid management is reaching its limits with conventional approaches. In the context of the Learning to Run a Power Network (L2RPN) challenge, it has been shown that Reinforcement Learning (RL) is an efficient and reliable approach with considerable potential for automatic grid operation. In this article, we analyse the agent submitted by Binbinchen and provide novel strategies to improve the agent, both for the RL and the rule-based approach. The main improvement is an N-1 strategy, in which we consider topology actions that keep the grid stable even if any single line is disconnected. Moreover, we propose reverting the topology to the original grid configuration, which proves to be beneficial. The improvements are evaluated against reference approaches on the challenge test sets and increase the performance of the rule-based agent by 27%. In a direct comparison between the rule-based and the RL agent, we find similar performance; the RL agent, however, has a clear computational advantage. We also analyse the behaviour in an exemplary case in more detail to provide additional insights. Here, we observe that the N-1 strategy diversifies the actions of the agents.
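To make the N-1 idea concrete, the sketch below shows how such a screening could look in Grid2Op, the framework underlying the L2RPN challenge: a candidate substation reconfiguration is accepted only if, for every possible single-line outage, a one-step simulation keeps all line loadings below a threshold. The environment name, the `RHO_MAX` threshold, and the `passes_n1` helper are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal N-1 screening sketch with Grid2Op (assumed setup, not the paper's code).
import grid2op

env = grid2op.make("l2rpn_neurips_2020_track1_small")  # assumed challenge environment
obs = env.reset()

RHO_MAX = 1.0  # keep every line below 100% loading (assumed threshold)

def passes_n1(obs, sub_id, bus_config):
    """Accept a substation reconfiguration only if the grid stays stable
    under every single-line outage (the N-1 criterion)."""
    for line_id in range(obs.n_line):
        # Combine the candidate topology action with the outage of one line.
        act = env.action_space({
            "set_bus": {"substations_id": [(sub_id, bus_config)]},
            "set_line_status": [(line_id, -1)],  # -1 disconnects the line
        })
        sim_obs, _, done, _ = obs.simulate(act)
        if done or sim_obs.rho.max() >= RHO_MAX:
            return False  # this outage would destabilise or overload the grid
    return True
```

In practice, an agent would run such a check over its shortlist of candidate topology actions and only execute those that survive all simulated outages, which is what makes the resulting action choices more robust and, as reported above, more diversified.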