Neural network subgrid stress models often exhibit a priori performance far better than their a posteriori performance, so models that look very promising a priori can fail completely in a posteriori Large Eddy Simulations (LES). This performance gap can be narrowed by combining two methods: training data augmentation and reducing the complexity of the neural network's inputs. Augmenting the training data with two different filters before training incurs no a priori performance degradation compared with a network trained on a single filter, while a posteriori, networks trained with two filters are far more robust across two LES codes with different numerical schemes. In addition, ablating the higher-order terms from the network's inputs makes the discrepancy between a priori and a posteriori performance less pronounced. When combined, networks that use both training data augmentation and the simpler input set have a posteriori performance far more reflective of their a priori evaluation.
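The abstract does not specify which two filters are used for augmentation; as a minimal sketch of the idea, assume a box (top-hat) filter and a Gaussian filter of comparable width applied to DNS velocity fields, with the exact subgrid stress computed per filter as the training target. The filter widths, field shapes, and random fields below are illustrative placeholders, not the paper's setup.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def subgrid_stress(u, v, filt):
    """Exact SGS stress component: tau_uv = filter(u*v) - filter(u)*filter(v)."""
    return filt(u * v) - filt(u) * filt(v)

# Two filters of comparable width: a box filter of width 4 cells, and a
# Gaussian whose sigma matches the box filter's second moment (Delta/sqrt(12)).
box = lambda f: uniform_filter(f, size=4, mode="wrap")
gauss = lambda f: gaussian_filter(f, sigma=4 / np.sqrt(12), mode="wrap")

rng = np.random.default_rng(0)
u = rng.standard_normal((64, 64, 64))  # stand-in for a DNS velocity component
v = rng.standard_normal((64, 64, 64))

# Each DNS snapshot yields one training sample per filter, doubling the data
# and exposing the network to two different filter shapes.
samples = []
for filt in (box, gauss):
    ubar, vbar = filt(u), filt(v)
    tau = subgrid_stress(u, v, filt)
    samples.append((ubar, vbar, tau))  # network inputs and regression target
```

One plausible reading of why this helps a posteriori is that different LES codes effectively impose different implicit filters, so a network that has only ever seen one filter shape overfits to it, while training on two filter shapes forces the learned mapping to be less filter-specific.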