Deep models have achieved impressive progress in solving partial differential equations (PDEs). A burgeoning paradigm is learning neural operators to approximate the input-output mappings of PDEs. While previous deep models have explored multiscale architectures and various operator designs, they are limited to learning the operators as a whole in the coordinate space. In real physical science problems, PDEs are complex coupled equations whose numerical solvers rely on discretization into a high-dimensional coordinate space, which cannot be precisely approximated by a single operator or efficiently learned due to the curse of dimensionality. We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs. Going beyond the coordinate space, LSM employs an attention-based hierarchical projection network to reduce the high-dimensional data into a compact latent space in linear time. Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space, which approximates complex input-output mappings by learning multiple basis operators and enjoys favorable theoretical guarantees for convergence and approximation. Experimentally, LSM achieves consistent state-of-the-art performance and yields an average relative error reduction of 11.5% across seven benchmarks covering both solid and fluid physics.
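The classical spectral methods that inspire the neural spectral block expand a solution in a fixed basis and apply a simple per-mode operator to each coefficient; LSM generalizes this by learning the basis operators. Below is a minimal sketch of the classical case, assuming the 1D Poisson problem -u''(x) = f(x) with u(0) = u(1) = 0 and a sine basis (the function names and grid sizes are illustrative, not from the paper):

```python
import numpy as np

def solve_poisson_spectral(f, n_modes=32, n_grid=256):
    """Classical spectral method sketch for -u'' = f on [0, 1], u(0) = u(1) = 0.

    Each sine mode phi_k(x) = sin(k*pi*x) is an eigenfunction of -d^2/dx^2
    with eigenvalue (k*pi)^2, so the per-mode "operator" is just division
    by (k*pi)^2. LSM replaces such fixed per-basis operators with learned ones.
    """
    x = np.linspace(0.0, 1.0, n_grid)
    dx = x[1] - x[0]
    u = np.zeros_like(x)
    for k in range(1, n_modes + 1):
        phi = np.sin(k * np.pi * x)          # basis function
        # Projection <f, phi_k>; endpoints vanish, so a plain sum*dx
        # coincides with the trapezoid rule here.
        f_k = 2.0 * np.sum(f(x) * phi) * dx
        u += (f_k / (k * np.pi) ** 2) * phi  # apply per-mode basis operator
    return x, u

# f(x) = pi^2 sin(pi x) has the exact solution u(x) = sin(pi x).
x, u = solve_poisson_spectral(lambda x: np.pi ** 2 * np.sin(np.pi * x))
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

The key design point this sketch mirrors is that solving in a spectral (latent) representation reduces a coupled problem over a fine grid to a handful of decoupled per-basis updates, which is what makes learning multiple basis operators in a compact latent space tractable.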