We propose a non-intrusive, reduced-basis, data-driven method for approximating both eigenvalues and eigenvectors of parametric eigenvalue problems. We generate the basis of the reduced space by applying proper orthogonal decomposition (POD) to a collection of pre-computed, full-order snapshots at a chosen set of parameters. In the online phase, we then use Bayesian linear regression (also known as Gaussian process regression, GPR) to predict both eigenvalues and eigenvectors at new parameter values. Following standard practice in supervised machine learning, the data generated in the offline phase are split into training and test sets for the numerical experiments. Furthermore, we discuss the connection between GPR and spline methods, and compare the performance of GPR against linear and cubic spline interpolation. We show that GPR outperforms these methods for functions of sufficient regularity. In this context, we examine several covariance functions, which strongly influence the performance of GPR. The proposed method is shown to be accurate and efficient for several one- and two-dimensional, affine and non-affine parametric eigenvalue problems that exhibit eigenvalue crossings.
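To make the offline/online structure concrete, the following is a minimal sketch of the POD-plus-GPR pipeline described above. It is not the authors' implementation: the snapshot data, eigenvalues, reduced dimension, and kernel choice are all illustrative placeholders, and scikit-learn's GaussianProcessRegressor stands in for whatever regression code the paper actually uses.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# ---- Offline phase (placeholder data; a real run would solve the ----
# ---- full-order eigenvalue problem at each training parameter)   ----
rng = np.random.default_rng(0)
mu_train = np.linspace(0.0, 1.0, 20).reshape(-1, 1)   # training parameters
N_h = 200                                             # full-order dimension (illustrative)
snapshots = rng.standard_normal((N_h, mu_train.size)) # stand-in for FOM eigenvectors
lam_train = np.sin(2 * np.pi * mu_train).ravel()      # stand-in eigenvalues

# POD basis via truncated SVD of the snapshot matrix
U, S, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5                    # reduced dimension, chosen from the decay of S
V = U[:, :r]             # POD basis, shape (N_h, r)

# Reduced coordinates of each snapshot in the POD basis
coeffs = V.T @ snapshots  # shape (r, n_train)

# GPR surrogates mapping parameter -> eigenvalue and parameter -> POD coefficients;
# the covariance function (here a scaled RBF) is one of the modeling choices
# the abstract says is compared in the paper.
kernel = ConstantKernel(1.0) * RBF(length_scale=0.1)
gpr_lam = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr_lam.fit(mu_train, lam_train)
gpr_coeff = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr_coeff.fit(mu_train, coeffs.T)  # multi-output regression, one target per coefficient

# ---- Online phase: predict at an unseen parameter --------------------
mu_new = np.array([[0.37]])
lam_pred = gpr_lam.predict(mu_new)       # predicted eigenvalue
a_pred = gpr_coeff.predict(mu_new)       # predicted reduced coefficients
v_pred = V @ a_pred.ravel()              # reconstructed approximate eigenvector
```

In the same spirit, the spline baselines mentioned in the abstract could be obtained by replacing the GPR surrogates with, e.g., scipy.interpolate piecewise-linear or cubic interpolants over the same training parameters, keeping the POD stage unchanged.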