Optimal Power Flow (OPF) is a long-standing research area in the power systems field that seeks the optimal operating point of electric power generators, and which must be solved every few minutes in real-world operation. However, due to the nonconvexities that arise in power generation systems, there is not yet a fast, robust solution technique for the full Alternating Current Optimal Power Flow (ACOPF) problem. In recent decades, power grids have evolved into the archetype of a dynamic, non-linear, large-scale control system, known as the power system, so the search for better and faster ACOPF solutions has become crucial. The emergence of Graph Neural Networks (GNNs) has enabled the natural application of Machine Learning (ML) algorithms to graph-structured data, such as power networks. Deep Reinforcement Learning (DRL), in turn, is known for its powerful capability to solve complex decision-making problems. Although solutions that use these two methods separately are beginning to appear in the literature, none has yet combined the advantages of both. We propose a novel architecture based on the Proximal Policy Optimization (PPO) algorithm with Graph Neural Networks to solve the OPF. The objective is to design an architecture that learns how to solve the optimization problem and that is at the same time able to generalize to unseen scenarios. We compare our solution with the DCOPF in terms of cost, after training our DRL agent on the IEEE 30-bus system and then computing the OPF on that base network with topology changes.