We propose a non-intrusive, reduced-basis, and data-driven method for approximating both eigenvalues and eigenvectors in parametric eigenvalue problems. We generate the basis of the reduced space by applying proper orthogonal decomposition (POD) to a collection of pre-computed, full-order snapshots at a chosen set of parameters. Then, in the online phase, we use Bayesian linear regression (also known as Gaussian process regression, GPR) to predict both eigenvalues and eigenvectors at new parameters. Following standard practice in supervised machine learning, the data generated in the offline phase are split into training and test sets for the numerical experiments. Furthermore, we discuss the connection between Gaussian process regression and spline methods, and compare the performance of GPR against linear and cubic spline methods. We show that GPR outperforms these methods for sufficiently regular functions. To this end, we examine several covariance functions, which strongly influence the performance of GPR. The proposed method is shown to be accurate and efficient for approximating several 1D and 2D, affine and non-affine, parameter-dependent eigenvalue problems that exhibit eigenvalue crossings.
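For concreteness, the following is a minimal sketch of the offline/online pipeline described above, using NumPy and scikit-learn's GaussianProcessRegressor on a toy parametric eigenvalue problem. The matrix family, parameter range, snapshot count, and POD tolerance are all illustrative assumptions, not the paper's actual test cases.

```python
# Minimal POD + GPR sketch for a parametric eigenvalue problem (toy example).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def full_order_eig(mu, n=50):
    """Smallest eigenpair of a toy parameter-dependent tridiagonal matrix A(mu)."""
    A = (np.diag(2.0 + mu * np.linspace(0.0, 1.0, n))
         - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
    w, V = np.linalg.eigh(A)
    v = V[:, 0]
    # Fix the sign convention so eigenvectors vary smoothly with the parameter.
    return w[0], v * np.sign(v[np.argmax(np.abs(v))])

# --- Offline phase: full-order snapshots at training parameters, then POD ---
mus_train = np.linspace(0.1, 2.0, 20)
pairs = [full_order_eig(mu) for mu in mus_train]
lams = np.array([p[0] for p in pairs])           # eigenvalue snapshots
S = np.column_stack([p[1] for p in pairs])       # eigenvector snapshot matrix

U, s, _ = np.linalg.svd(S, full_matrices=False)  # POD via SVD of the snapshots
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 1.0 - 1e-10)) + 1
Ur = U[:, :r]                                    # reduced basis
coeffs = Ur.T @ S                                # reduced coordinates per snapshot

# --- Online phase: GPR surrogates for the eigenvalue and the POD coefficients ---
X = mus_train.reshape(-1, 1)
gp_lam = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, lams)
gp_coef = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, coeffs.T)

mu_new = np.array([[1.37]])                       # unseen test parameter
lam_pred = gp_lam.predict(mu_new).item()          # predicted eigenvalue
vec_pred = Ur @ gp_coef.predict(mu_new).ravel()   # predicted eigenvector

lam_ref, vec_ref = full_order_eig(mu_new.item())
print(f"eigenvalue error:  {abs(lam_pred - lam_ref):.2e}")
print(f"eigenvector error: {np.linalg.norm(vec_pred - vec_ref):.2e}")
```

The covariance function (here an RBF kernel) is the main modeling choice; swapping in other kernels changes the regularity assumptions the GPR surrogate imposes, which is the trade-off the abstract alludes to when comparing kernels and spline methods.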