Turbulence simulation with classical numerical solvers requires high-resolution grids to accurately resolve dynamics. Here we train learned simulators at low spatial and temporal resolution to capture turbulent dynamics generated at high resolution. We show that our proposed model can simulate turbulent dynamics more accurately than classical numerical solvers at comparably low resolutions, across a range of scientifically relevant metrics. Our model is trained end-to-end from data and is capable of learning a variety of challenging chaotic and turbulent dynamics at low resolution, including trajectories generated by the state-of-the-art Athena++ engine. We show that our simpler, general-purpose architecture outperforms various more specialized, turbulence-specific architectures from the learned turbulence simulation literature. Learned simulators generally tend to produce unstable trajectories; however, we show that tuning the training noise and the temporal downsampling factor resolves this problem. We also find that, while generalization beyond the training distribution remains a challenge for learned models, training noise, additional loss constraints, and dataset augmentation can help. Broadly, we conclude that our learned simulator outperforms traditional solvers run on coarser grids, and emphasize that simple design choices can offer stability and robust generalization.
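As a rough illustration of the stabilization recipe named in the abstract (temporal downsampling of the ground-truth trajectories plus noise added to the inputs during training), the sketch below builds noisy one-step training pairs for a learned simulator. The function name, `stride`, and `noise_std` are hypothetical placeholders chosen for the example, not the paper's actual pipeline or values.

```python
import numpy as np

def make_training_pairs(trajectory, stride=4, noise_std=0.01, rng=None):
    """Build one-step training pairs for a learned simulator (illustrative sketch).

    `trajectory` is a high-frequency ground-truth rollout of shape (T, ...).
    Temporal downsampling by `stride` makes each learned step span a larger
    physical time interval; Gaussian noise on the inputs exposes the model to
    the kind of small errors that accumulate during long rollouts, so it can
    learn to correct them rather than compound them.
    """
    rng = np.random.default_rng() if rng is None else rng
    coarse = trajectory[::stride]                     # temporal downsampling
    inputs, targets = coarse[:-1], coarse[1:]         # one-step (state, next-state) pairs
    noisy_inputs = inputs + rng.normal(0.0, noise_std, inputs.shape)
    return noisy_inputs, targets                      # loss is taken against clean targets
```

In this kind of setup, the noise scale and the downsampling stride act as tuning knobs: too little noise and rollouts drift off the data manifold and diverge; too much and single-step accuracy degrades.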