Gradient-enhanced Kriging (GE-Kriging) is a well-established surrogate modelling technique for approximating expensive computational models. However, it tends to become impractical for high-dimensional problems due to the large inherent correlation matrix and the associated high-dimensional hyper-parameter tuning problem. To address these issues, we propose a new method in this paper, called sliced GE-Kriging (SGE-Kriging), which reduces both the size of the correlation matrix and the number of hyper-parameters. Firstly, we perform a derivative-based global sensitivity analysis to detect the relative importance of each input variable with respect to the model response. Then, we propose to split the training sample set into multiple slices, and invoke Bayes' theorem to approximate the full likelihood function via a sliced likelihood function, in which multiple small correlation matrices are utilized to describe the correlation of the sample set. Additionally, we replace the original high-dimensional hyper-parameter tuning problem with a low-dimensional counterpart by learning the relationship between the hyper-parameters and the global sensitivity indices. Finally, we validate SGE-Kriging by means of numerical experiments with several benchmark problems. The results show that the SGE-Kriging model features an accuracy and robustness comparable to that of the standard one, but comes at a much lower training cost. The benefits are most evident in high-dimensional problems.
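To make the sliced-likelihood idea concrete, the following is a minimal NumPy sketch, not the paper's exact formulation: the training set is partitioned into slices, and the full Gaussian-process negative log-likelihood is approximated by a sum of per-slice terms, each involving only a small correlation matrix. The Gaussian correlation kernel, the equal-size slice partition, and the function names (`gauss_corr`, `sliced_neg_log_likelihood`) are illustrative assumptions.

```python
import numpy as np

def gauss_corr(X, theta):
    # Gaussian (squared-exponential) correlation matrix for one slice of samples;
    # theta holds one length-scale hyper-parameter per input dimension.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2 * theta).sum(axis=-1)
    return np.exp(-d2)

def sliced_neg_log_likelihood(X, y, theta, n_slices, nugget=1e-10):
    # Approximate the full negative log-likelihood by summing per-slice
    # contributions, so only small (n/n_slices)-sized matrices are factorized.
    n = len(y)
    nll = 0.0
    for ix in np.array_split(np.arange(n), n_slices):
        R = gauss_corr(X[ix], theta) + nugget * np.eye(len(ix))
        L = np.linalg.cholesky(R)
        alpha = np.linalg.solve(L, y[ix])        # alpha @ alpha = y^T R^{-1} y
        nll += 0.5 * (alpha @ alpha
                      + 2.0 * np.log(np.diag(L)).sum()   # log det R
                      + len(ix) * np.log(2.0 * np.pi))
    return nll
```

With `n_slices = 1` this reduces to the standard full likelihood; increasing `n_slices` trades some accuracy of the likelihood approximation for a large drop in factorization cost, since each Cholesky factorization scales cubically in the slice size rather than in the full sample size.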