Bayesian optimization is a methodology for global optimization of unknown and expensive objectives. It combines a surrogate Bayesian regression model with an acquisition function to decide where to evaluate the objective. Typical regression models are given by Gaussian processes with stationary covariance functions. However, these functions are unable to express prior input-dependent information, including possible locations of the optimum. The ubiquity of stationary models has led to the common practice of exploiting prior information via informative mean functions. In this paper, we highlight that these models can perform poorly, especially in high dimensions. We propose novel informative covariance functions for optimization, leveraging nonstationarity to encode preferences for certain regions of the search space and adaptively promote local exploration during optimization. We demonstrate that the proposed functions can increase the sample efficiency of Bayesian optimization in high dimensions, even under weak prior information.
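To make the idea of an informative covariance concrete, here is a minimal sketch, not the paper's exact construction: a stationary RBF kernel modulated by an input-dependent amplitude that is larger near a user-supplied anchor point, encoding a prior belief that the optimum lies in that region. The names `rbf`, `prior_scaling`, `informative_kernel`, and the anchor `x0` are illustrative assumptions, not symbols from the paper.

```python
import numpy as np

def rbf(x, y, lengthscale=1.0):
    """Stationary squared-exponential (RBF) kernel."""
    d2 = np.sum((x - y) ** 2)
    return np.exp(-0.5 * d2 / lengthscale**2)

def prior_scaling(x, x0, width=1.0, amplitude=2.0):
    """Input-dependent amplitude: inflates the prior signal near the
    anchor x0, where the optimum is believed to lie (an assumption
    made for illustration)."""
    d2 = np.sum((x - x0) ** 2)
    return 1.0 + (amplitude - 1.0) * np.exp(-0.5 * d2 / width**2)

def informative_kernel(x, y, x0):
    """Nonstationary kernel k(x, y) = s(x) * s(y) * k_stat(x, y).
    Scaling a PSD kernel by an outer product of a positive function
    keeps it positive semi-definite."""
    return prior_scaling(x, x0) * prior_scaling(y, x0) * rbf(x, y)

x0 = np.zeros(2)                          # hypothesized optimum location
near = np.array([0.1, 0.0])               # point close to the anchor
far = np.array([3.0, 3.0])                # point far from the anchor

# The prior variance k(x, x) is larger near the anchor than far away,
# so the surrogate expresses more prior "signal" where the optimum is
# believed to be; far from the anchor it reverts to the stationary kernel.
var_near = informative_kernel(near, near, x0)
var_far = informative_kernel(far, far, x0)
```

Because the scaling factors to a product `s(x) s(y)`, positive semi-definiteness of the base kernel is preserved; making `x0`, `width`, or `amplitude` adapt during optimization would be one way to promote local exploration, in the spirit of the adaptive behavior the abstract describes.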