We present a case study of a model-free reinforcement learning (RL) framework for solving stochastic optimal control under a predefined parameter uncertainty distribution and partial system observability. We focus on the robust optimal well control problem, a subject of intensive research in subsurface reservoir management. For this problem, the system is partially observed since data are available only at well locations. Furthermore, the model parameters are highly uncertain due to the sparsity of available field data. In principle, RL algorithms can learn optimal action policies -- maps from states to actions -- that maximize a numerical reward signal. In deep RL, this state-to-action mapping is parameterized by a deep neural network. In the RL formulation of the robust optimal well control problem, the states are represented by saturation and pressure values at well locations, while the actions represent the valve openings controlling flow through the wells. The numerical reward is the total sweep efficiency, and the uncertain model parameter is the subsurface permeability field. Model parameter uncertainties are handled by a domain randomisation scheme that exploits cluster analysis of the uncertainty distribution. We present numerical results for two state-of-the-art RL algorithms, proximal policy optimization (PPO) and advantage actor-critic (A2C), on two subsurface flow test cases representing two distinct uncertainty distributions of the permeability field. The results are benchmarked against optimisation results obtained with the differential evolution algorithm. Furthermore, we demonstrate the robustness of the proposed RL approach by evaluating the learned control policy on unseen samples drawn from the parameter uncertainty distribution that were not used during training.
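As a rough illustration of the clustering-based domain randomisation idea described above (not the authors' implementation; all names, the grid size, the prior, and the cluster count are assumptions for the sketch), one can draw an ensemble of permeability realisations from the uncertainty distribution, cluster them, and then sample training episodes by first picking a cluster and then a realisation within it, so that distinct modes of the distribution are all visited during training:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_permeability_fields(n_samples, grid=(16, 16)):
    # Placeholder prior: log-normal permeability realisations on a small grid.
    return rng.lognormal(mean=0.0, sigma=1.0, size=(n_samples,) + grid)

def kmeans(X, k, iters=20):
    # Minimal k-means in NumPy so the sketch stays self-contained.
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each sample to its nearest center.
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        # Update each center to the mean of its members (skip empty clusters).
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# 1. Draw an ensemble from the (here synthetic) uncertainty distribution.
fields = sample_permeability_fields(200)
flat = np.log(fields).reshape(len(fields), -1)

# 2. Cluster the ensemble into representative regions of the distribution.
n_clusters = 4
labels, centers = kmeans(flat, n_clusters)

def randomised_field():
    # 3. Domain randomisation: pick a (non-empty) cluster uniformly, then a
    #    realisation from it, so rare modes are not drowned out by common ones.
    occupied = np.unique(labels)
    c = rng.choice(occupied)
    members = np.flatnonzero(labels == c)
    return fields[rng.choice(members)]

episode_field = randomised_field()  # permeability field for one RL episode
```

In an actual training loop, `randomised_field()` would be called at every environment reset, and the RL agent (e.g. PPO or A2C) would observe only well-location saturations and pressures while acting on valve openings.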