Modeling the subgrid-scale dynamics of reduced models is a long-standing open problem with applications in ocean, atmosphere, and climate prediction, where direct numerical simulation (DNS) is impossible. While neural networks (NNs) have already been applied successfully to a range of three-dimensional problems, the backward energy transfer of two-dimensional flows remains a stability issue for trained models. We show that learning a model jointly with the dynamical solver and a meaningful $\textit{a posteriori}$-based loss function leads to stable and realistic simulations when applied to quasi-geostrophic turbulence.