We propose a non-intrusive, reduced-basis, data-driven method for approximating both eigenvalues and eigenvectors in parametric eigenvalue problems. We generate the basis of the reduced space by applying proper orthogonal decomposition (POD) to a collection of pre-computed, full-order snapshots at a chosen set of parameters. In the online phase, we then use Bayesian linear regression, also known as Gaussian process regression (GPR), to predict both eigenvalues and eigenvectors at new parameter values. In the numerical experiments, the data generated in the offline phase are split into training and test sets, following standard practice in supervised machine learning. Furthermore, we discuss the connection between GPR and spline methods, and compare the performance of GPR against linear and cubic spline methods; we show that GPR outperforms these methods for functions of a certain regularity. In this context, we also discuss several covariance functions, whose choice influences the performance of GPR. The proposed method is shown to be accurate and efficient for multiple 1D and 2D, affine and non-affine, parameter-dependent eigenvalue problems that exhibit eigenvalue crossings.
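To make the offline/online pipeline concrete, the following is a minimal Python sketch of the approach described above: POD of full-order eigenvector snapshots followed by GPR prediction of eigenvalues and reduced coefficients. It is illustrative only; the toy problem A(mu) = A0 + mu*A1, the snapshot count, the truncation tolerance, the sign-fixing rule, and the RBF covariance function are assumptions, not the paper's actual setup.

```python
# Minimal sketch of the offline/online POD-GPR pipeline (assumed setup).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
n = 50                                              # full-order dimension (assumed)
A0 = rng.standard_normal((n, n)); A0 = A0 + A0.T    # symmetric base matrix (toy problem)
A1 = rng.standard_normal((n, n)); A1 = A1 + A1.T    # symmetric parametric part (toy problem)

def smallest_eigenpair(mu):
    """Full-order solve: smallest eigenpair of A(mu) = A0 + mu * A1."""
    lam, vec = np.linalg.eigh(A0 + mu * A1)
    v = vec[:, 0]
    # Fix the arbitrary sign of the eigenvector so snapshots vary smoothly in mu.
    return lam[0], v * np.sign(v[np.argmax(np.abs(v))])

# --- offline phase: snapshots, POD basis, regression training ---
mu_train = np.linspace(0.0, 1.0, 30)
pairs = [smallest_eigenpair(mu) for mu in mu_train]
lams = np.array([p[0] for p in pairs])              # eigenvalue snapshots
S = np.column_stack([p[1] for p in pairs])          # eigenvector snapshot matrix
U, s, _ = np.linalg.svd(S, full_matrices=False)
r = int(np.sum(s / s[0] > 1e-8))                    # truncation tolerance (assumed)
V = U[:, :r]                                        # POD basis of the reduced space
coeffs = V.T @ S                                    # reduced coordinates of the snapshots

X = mu_train.reshape(-1, 1)
gpr_val = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, lams)
gpr_vec = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, coeffs.T)

# --- online phase: cheap prediction at a new parameter value ---
mu_new = np.array([[0.37]])
lam_pred = gpr_val.predict(mu_new)[0]
v_pred = V @ gpr_vec.predict(mu_new).ravel()        # lift back to full order

lam_ref, v_ref = smallest_eigenpair(0.37)
print(f"eigenvalue error:  {abs(lam_pred - lam_ref):.2e}")
print(f"eigenvector error: {np.linalg.norm(v_pred - v_ref):.2e}")
```

In this sketch the regression is non-intrusive: once the snapshots are computed, only the POD basis and the trained regressors are needed online, and no further full-order solves are required at new parameter values.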