Simulations of complex physical systems are typically realized by discretizing partial differential equations (PDEs) on unstructured meshes. While neural networks have recently been explored for surrogate and reduced-order modeling of PDE solutions, they often ignore interactions or hierarchical relations between input features and process them as concatenated mixtures. We generalize the idea of conditional parameterization -- using trainable functions of input parameters to generate the weights of a neural network -- and extend it in a flexible way to encode critical information. Inspired by discretized numerical methods, choices of the parameters include physical quantities and mesh topology features. The functional relation between the modeled features and the parameters is built into the network architecture. The method is implemented on different networks and applied to frontier scientific machine learning tasks, including the discovery of unmodeled physics, super-resolution of coarse fields, and the simulation of unsteady flows with chemical reactions. The results show that conditionally parameterized networks achieve superior performance compared with their traditional counterparts. The CP-GNet -- an architecture that can be trained on very few data snapshots -- is proposed as the first deep learning model capable of standalone prediction of reacting flows on irregular meshes.
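To make the core idea concrete, the following is a minimal sketch (not the paper's implementation) of a conditionally parameterized layer: a small hypernetwork maps conditioning parameters (e.g. local physical quantities or mesh features) to the weights and bias of the layer that transforms the modeled features. All function names and the random stand-in weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hypernet(cond_params, in_dim, out_dim, hidden=8):
    # Hypothetical trainable function of the conditioning parameters.
    # A tiny one-hidden-layer MLP maps cond_params to a flat vector that
    # is split into the main layer's weight matrix and bias.
    # The MLP weights below are random stand-ins for trained parameters.
    W1 = rng.standard_normal((cond_params.size, hidden)) * 0.1
    W2 = rng.standard_normal((hidden, in_dim * out_dim + out_dim)) * 0.1
    h = np.tanh(cond_params @ W1)
    flat = h @ W2
    W = flat[: in_dim * out_dim].reshape(in_dim, out_dim)
    b = flat[in_dim * out_dim :]
    return W, b

def conditional_linear(x, cond_params, out_dim):
    # Unlike a standard layer with fixed weights, the weights here are
    # generated from cond_params, so the functional relation between the
    # modeled features and the parameters is built into the architecture.
    W, b = hypernet(cond_params, x.size, out_dim)
    return x @ W + b

x = rng.standard_normal(4)        # modeled features, e.g. at one mesh node
cond = np.array([1.2, 0.3])       # conditioning parameters, e.g. local physical quantities
y = conditional_linear(x, cond, out_dim=3)
print(y.shape)  # (3,)
```

In a trained model the hypernetwork weights would be learned jointly with the rest of the network; here they are fixed random values purely to show the data flow.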