Scientific machine learning for inferring dynamical systems combines data-driven modeling, physics-based modeling, and empirical knowledge. It plays an essential role in engineering design and digital twinning. In this work, we primarily focus on an operator inference methodology that builds dynamical models, preferably of low dimension, with a prior hypothesis on the model structure, often determined by known physics or given by experts. For inference, we then aim to learn the operators of a model by setting up an appropriate optimization problem. One of the critical properties of dynamical systems is stability; however, this property is not guaranteed by inferred models in general. In this work, we propose inference formulations to learn quadratic models that are stable by design. More precisely, we discuss parameterizations of quadratic systems that are locally and globally stable. Moreover, for quadratic systems that have no stable equilibrium point yet exhibit bounded trajectories (e.g., the chaotic Lorenz model), we discuss how to parameterize such bounded behaviors in the learning process. Using these parameterizations, we set up inference problems, which we then solve with a gradient-based optimization method. Furthermore, to avoid numerical derivatives while still learning continuous-time systems, we make use of an integral form of the differential equations. We present several numerical examples illustrating the preservation of stability and compare the proposed methods with the existing state-of-the-art operator inference approach. By means of numerical examples, we also demonstrate how the proposed methods can be employed to discover governing equations and energy-preserving models.
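As a point of reference for the model class discussed above, a minimal sketch of a quadratic dynamical model (an illustration only; the symbols and the exact parameterization used in the paper may differ) is

\[
\dot{x}(t) = A\,x(t) + H\bigl(x(t) \otimes x(t)\bigr),
\]

where \(x(t)\) denotes the state, \(A\) collects the linear dynamics, and \(H\) the quadratic interactions. Under this reading, stability by design amounts to constraining the learned operators, for instance requiring \(A\) to be Hurwitz (ensuring local stability) and, for stronger guarantees, constraining the quadratic term to be energy-preserving, so that every model produced by the optimization satisfies the desired stability or boundedness property.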