Operator regression provides a powerful means of constructing discretization-invariant emulators for partial differential equations (PDEs) describing physical systems. Neural operators specifically employ deep neural networks to approximate mappings between infinite-dimensional Banach spaces. As data-driven models, neural operators require labeled training observations, which for complex high-fidelity models take the form of high-dimensional datasets containing redundant and noisy features that can hinder gradient-based optimization. Mapping these high-dimensional datasets to a low-dimensional latent space of salient features can both simplify working with the data and enhance learning. In this work, we investigate the latent deep operator network (L-DeepONet), an extension of the standard DeepONet that leverages latent representations of high-dimensional PDE input and output functions identified with suitable autoencoders. We demonstrate that L-DeepONet outperforms the standard approach in terms of both accuracy and computational efficiency across diverse time-dependent PDEs, e.g., modeling fracture growth in brittle materials, convective fluid flows, and large-scale atmospheric flows exhibiting multiscale dynamical features.
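To make the architecture concrete, the sketch below illustrates the L-DeepONet idea in PyTorch: an autoencoder compresses high-dimensional PDE snapshots to a low-dimensional latent code, and a DeepONet (branch net for the latent input function, trunk net for the query coordinate) learns the operator in that latent space before decoding back to full resolution. This is a minimal sketch under my own assumptions; the layer sizes, class names, and training details are illustrative and not the authors' implementation.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Compresses a flattened PDE snapshot of size n_full into an
    n_latent-dimensional code and reconstructs it (illustrative sizes)."""
    def __init__(self, n_full, n_latent):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_full, 256), nn.ReLU(),
                                     nn.Linear(256, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 256), nn.ReLU(),
                                     nn.Linear(256, n_full))

    def forward(self, x):
        return self.decoder(self.encoder(x))

class LatentDeepONet(nn.Module):
    """DeepONet acting on latent codes: the branch net encodes the latent
    input function, the trunk net encodes a query coordinate (e.g. time t),
    and their inner product over p basis modes gives the latent output."""
    def __init__(self, n_latent, coord_dim=1, p=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                    nn.Linear(128, p * n_latent))
        self.trunk = nn.Sequential(nn.Linear(coord_dim, 128), nn.ReLU(),
                                   nn.Linear(128, p))
        self.p, self.n_latent = p, n_latent

    def forward(self, z_in, coords):
        b = self.branch(z_in).view(-1, self.n_latent, self.p)  # (batch, n_latent, p)
        t = self.trunk(coords).unsqueeze(-1)                    # (batch, p, 1)
        return torch.bmm(b, t).squeeze(-1)                      # (batch, n_latent)

# Hypothetical usage: encode inputs, learn the operator in latent space,
# then decode predictions back to the full-resolution field.
ae = Autoencoder(n_full=64 * 64, n_latent=25)
onet = LatentDeepONet(n_latent=25)
u0 = torch.randn(8, 64 * 64)      # batch of initial conditions (flattened)
z0 = ae.encoder(u0)               # latent input functions
t = torch.rand(8, 1)              # query times
u_t = ae.decoder(onet(z0, t))     # predicted fields at the query times
```

In practice the autoencoder would first be trained on the snapshot data, and the latent DeepONet then trained on the encoded input/output pairs; both networks are far smaller than an operator trained directly on the full-resolution fields, which is the source of the accuracy and efficiency gains the abstract claims.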