In this work, we revisit the marking decisions made in the standard adaptive finite element method (AFEM). Experience shows that a na\"{i}ve marking policy leads to inefficient use of computational resources for adaptive mesh refinement (AMR). Consequently, using AFEM in practice often involves ad hoc or time-consuming offline tuning to set appropriate parameters for the marking subroutine. To address these practical concerns, we recast AMR as a Markov decision process in which refinement parameters can be selected on the fly at run time, without the need for pre-tuning by expert users. In this new paradigm, the refinement parameters are also chosen adaptively via a marking policy that can be optimized using methods from reinforcement learning. We use the Poisson equation to demonstrate our techniques on $h$- and $hp$-refinement benchmark problems, and our experiments suggest that superior marking policies remain undiscovered for many classical AFEM applications. Furthermore, an unexpected observation from this work is that marking policies trained on one family of PDEs are sometimes robust enough to perform well on problems far outside the training family. For illustration, we show that a simple $hp$-refinement policy trained on 2D domains with only a single re-entrant corner can be deployed on far more complicated 2D domains, and even 3D domains, without significant performance loss. For reproduction and broader adoption, we accompany this work with an open-source implementation of our methods.