Real-time control of pumps in water distribution systems (WDSs) can be infeasible because computing the optimal pump speeds is resource-intensive. Even the capabilities of smart water networks cannot reduce this computational cost when conventional optimization techniques are used. Deep reinforcement learning (DRL) is presented here as a pump controller in two WDSs. An agent based on a dueling deep Q-network is trained to set the pump speeds from instantaneous nodal pressure data. Conventional optimization techniques (e.g., the Nelder-Mead method, differential evolution) serve as baselines. The DRL agent reaches above 0.98 of the total efficiency of the best-performing baseline, with a speedup of roughly 2x over it. The main contribution of the presented approach is that the agent can run the pumps in real time because it depends only on measurement data. If the WDS is replaced with a hydraulic simulation, the agent still outperforms the conventional techniques in search speed.
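The core of the dueling deep Q-network is its aggregation rule, Q(s, a) = V(s) + A(s, a) - mean_a A(s, a), which separates the value of a pressure state from the advantage of each discrete pump-speed action. The sketch below illustrates only this aggregation with a single NumPy forward pass; the layer sizes, the number of pressure sensors, the action discretization, and the random weights are illustrative assumptions, not the trained network from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not taken from the paper):
N_NODES = 8      # pressure measurements -> state dimension
N_ACTIONS = 5    # discretized pump-speed settings
HIDDEN = 16

# Random weights stand in for trained parameters.
W_h = rng.normal(size=(N_NODES, HIDDEN))
W_v = rng.normal(size=(HIDDEN, 1))          # value stream V(s)
W_a = rng.normal(size=(HIDDEN, N_ACTIONS))  # advantage stream A(s, a)

def dueling_q(pressures):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    h = np.maximum(0.0, pressures @ W_h)    # shared ReLU feature layer
    v = h @ W_v                             # shape (1,)
    a = h @ W_a                             # shape (N_ACTIONS,)
    return v + a - a.mean()                 # broadcasts to (N_ACTIONS,)

state = rng.uniform(20.0, 60.0, size=N_NODES)  # nodal pressures [m], synthetic
q = dueling_q(state)
action = int(np.argmax(q))  # greedy pump-speed choice for this state
```

Subtracting the mean advantage makes the decomposition identifiable (the Q-values average to V(s)), which is what distinguishes the dueling architecture from a plain DQN head.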