We investigate the scaling and efficiency of the deep neural network multigrid method (DNN-MG). DNN-MG is a novel neural network-based technique for the simulation of the Navier-Stokes equations that combines an adaptive geometric multigrid solver, i.e. a highly efficient classical solution scheme, with a recurrent neural network with memory. In DNN-MG, the neural network replaces one or more of the finest multigrid layers and provides a correction for the classical solve in the next time step. This leads to little degradation in the solution quality while substantially reducing the overall computational costs. At the same time, the use of the multigrid solver at the coarse scales allows for a compact network that is easy to train, generalizes well, and allows for the incorporation of physical constraints. Previous work on DNN-MG focused on the overall scheme and on how to enforce divergence freedom in the solution. In this work, we investigate how the network size affects training, solution quality, and the overall runtime of the computations. Our results demonstrate that larger networks are able to capture the flow behavior better while requiring only a little additional training time. At runtime, the use of the neural network correction can even reduce the computation time compared to a classical multigrid simulation, through faster convergence of the nonlinear solve that is required at every time step.
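The scheme described above, a classical multigrid solve on the coarse levels followed by a learned correction in place of the finest levels, can be sketched as follows. This is a minimal illustration under stated assumptions: `multigrid_solve` and `network_correction` are hypothetical placeholders for the classical solver and the recurrent network, not the authors' implementation.

```python
import numpy as np

def dnn_mg_step(u, hidden, multigrid_solve, network_correction):
    """One illustrative DNN-MG time step (hedged sketch).

    u       : current solution vector on the retained multigrid levels
    hidden  : recurrent network memory carried across time steps
    """
    # 1. Classical geometric multigrid solve on the coarse levels only;
    #    the finest level(s) are not assembled or solved.
    u_coarse = multigrid_solve(u)
    # 2. The recurrent network predicts a fine-scale correction from the
    #    coarse solution and its memory of previous time steps.
    correction, hidden = network_correction(u_coarse, hidden)
    # 3. The corrected solution enters the nonlinear solve of the next
    #    time step, which can speed up its convergence.
    return u_coarse + correction, hidden
```

With dummy stand-ins for the solver and network, the step simply composes the coarse solve with the predicted correction; in DNN-MG proper these would be the adaptive multigrid solver and the trained network with memory.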