Accurately solving partial differential equations (PDEs) is essential across many scientific disciplines. However, high-fidelity solvers can be computationally prohibitive, motivating the development of reduced-order models (ROMs). Recently, Latent Space Dynamics Identification (LaSDI) was proposed as a data-driven, non-intrusive ROM framework. LaSDI compresses the training data with an autoencoder and learns user-specified ordinary differential equations (ODEs) that govern the latent dynamics, enabling rapid predictions for unseen parameters. While LaSDI has produced effective ROMs for numerous problems, its autoencoder must simultaneously reconstruct the training data and satisfy the imposed latent dynamics, two often competing objectives that limit accuracy, particularly for complex or high-frequency phenomena. To address this limitation, we propose multi-stage Latent Space Dynamics Identification (mLaSDI), in which LaSDI is trained sequentially in stages. After training the initial autoencoder, we train additional decoders that map the latent trajectories to the residuals left by previous stages. This staged residual learning, combined with periodic activation functions, enables recovery of high-frequency content without sacrificing the interpretability of the latent dynamics. Numerical experiments on a multiscale oscillating system, unsteady wake flow, and the 1D-1V Vlasov equation demonstrate that mLaSDI achieves significantly lower reconstruction and prediction errors, often by an order of magnitude, while requiring less training time and less hyperparameter tuning than standard LaSDI.
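To make the staged residual idea concrete, the following is a minimal sketch in PyTorch, not the authors' implementation: after a first-stage autoencoder (with its latent ODEs) is trained, an additional decoder with sine activations is fit to the residual between the data and the stage-1 reconstruction. All names, layer sizes, and array shapes (`SineDecoder`, `train_residual_stage`, `latent_dim`, `full_dim`) are illustrative assumptions.

```python
# Hedged sketch of staged residual learning with periodic activations (not the original mLaSDI code).
import torch
import torch.nn as nn

class SineDecoder(nn.Module):
    """Small decoder with periodic (sine) activations mapping latent states to full-order snapshots."""
    def __init__(self, latent_dim, out_dim, width=64):
        super().__init__()
        self.fc1 = nn.Linear(latent_dim, width)
        self.fc2 = nn.Linear(width, out_dim)

    def forward(self, z):
        return self.fc2(torch.sin(self.fc1(z)))

def train_residual_stage(decoder, latent_traj, residual, epochs=1000, lr=1e-3):
    """Fit one additional decoder to the reconstruction residual left by earlier stages."""
    opt = torch.optim.Adam(decoder.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = torch.mean((decoder(latent_traj) - residual) ** 2)
        loss.backward()
        opt.step()
    return decoder

# Illustrative usage: suppose stage 1 has produced latent trajectories `z` and
# reconstructions `recon1` of the training snapshots `x` (stand-in tensors here).
if __name__ == "__main__":
    n_t, latent_dim, full_dim = 200, 5, 1000        # assumed sizes
    z = torch.randn(n_t, latent_dim)                 # stand-in latent trajectories
    x = torch.randn(n_t, full_dim)                   # stand-in training snapshots
    recon1 = torch.zeros_like(x)                     # stand-in stage-1 reconstruction
    stage2 = train_residual_stage(SineDecoder(latent_dim, full_dim), z, x - recon1)
    improved = recon1 + stage2(z)                    # stage-2 output corrects the stage-1 residual
```

The design choice this illustrates is that each new stage only adds a decoder acting on the already-learned latent trajectories, so the interpretable latent ODEs from the first stage are left untouched while the reconstruction error is progressively reduced.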