This work presents a new procedure for obtaining predictive distributions in the context of Gaussian process (GP) modeling, in which the interpolation constraints are relaxed outside some ranges of interest: the mean of the predictive distribution no longer necessarily interpolates the observed values when they fall outside the ranges of interest, but is simply constrained to remain outside them. This method, called relaxed Gaussian process (reGP) interpolation, provides better predictive distributions within the ranges of interest, especially in cases where a stationarity assumption for the GP model is not appropriate. It can be viewed as a goal-oriented method and becomes particularly interesting in Bayesian optimization, for example for the minimization of an objective function, where good predictive distributions for low function values are important. When the expected improvement criterion and reGP are used to sequentially choose evaluation points, the convergence of the resulting optimization algorithm is theoretically guaranteed, provided that the function to be optimized lies in the reproducing kernel Hilbert space attached to the known covariance of the underlying Gaussian process. Experiments indicate that using reGP instead of stationary GP models in Bayesian optimization is beneficial.
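To make the relaxation idea concrete, the sketch below illustrates it in a minimal one-dimensional setting. It is only an illustration under simplifying assumptions, not the exact reGP estimation procedure: a squared-exponential kernel with fixed hyperparameters, a range of interest of the form (-inf, t] defined by a hypothetical threshold t, and out-of-range observations replaced by pseudo-values that are free to move as long as they stay above t, chosen here by maximizing the GP likelihood.

```python
# Minimal sketch of the relaxation idea behind reGP (illustrative assumptions:
# squared-exponential kernel with fixed hyperparameters, range of interest (-inf, t]).
# Observations above the threshold t are replaced by pseudo-values constrained to
# remain above t, chosen by maximizing the GP likelihood. This is a sketch of the
# principle only, not the authors' exact procedure.
import numpy as np
from scipy.optimize import minimize

def sq_exp_kernel(a, b, sigma2=1.0, ell=0.2):
    """Squared-exponential covariance between 1-d input arrays a and b."""
    d = a[:, None] - b[None, :]
    return sigma2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x, y, xs, jitter=1e-6):
    """Standard GP posterior mean and variance at test points xs."""
    K = sq_exp_kernel(x, x) + jitter * np.eye(len(x))
    Ks = sq_exp_kernel(xs, x)
    Kss = sq_exp_kernel(xs, xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - np.sum(v ** 2, axis=0)
    return mean, var

def neg_log_lik(y, x, jitter=1e-6):
    """Negative log marginal likelihood of y under the GP prior (constant dropped)."""
    K = sq_exp_kernel(x, x) + jitter * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L)))

def relaxed_values(x, y, t):
    """Replace observations above t by pseudo-values >= t maximizing the likelihood."""
    out = y > t                      # interpolation is relaxed at these points
    if not np.any(out):
        return y
    def obj(z):
        y_rel = y.copy()
        y_rel[out] = z
        return neg_log_lik(y_rel, x)
    bounds = [(t, None)] * int(out.sum())  # pseudo-values must stay above the threshold
    res = minimize(obj, y[out], bounds=bounds)
    y_rel = y.copy()
    y_rel[out] = res.x
    return y_rel

# Toy usage: interpolate exactly only where the function is low (range of interest y <= t).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 8)
y = np.sin(12.0 * x) + 5.0 * (x > 0.7)   # strongly non-stationary toy data
t = np.quantile(y, 0.5)                  # hypothetical threshold defining the range of interest
xs = np.linspace(0.0, 1.0, 50)
mean_relaxed, var_relaxed = gp_predict(x, relaxed_values(x, y, t), xs)
```

In this toy example, the large observations on the right-hand side of the domain are no longer interpolated exactly; they are only required to remain above the threshold, which keeps the predictive distribution from being distorted in the low-value region of interest, as when a stationary model is forced to fit both regimes.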