Bayesian optimization (BO) with Gaussian processes (GPs) as surrogate models is widely used to optimize analytically unknown and expensive-to-evaluate functions. In this paper, we propose Prior-mean-RObust Bayesian Optimization (PROBO), which outperforms classical BO on specific problems. First, we study the effect of the GP prior specification on the convergence of classical BO. We find that, among all prior components, the prior's mean parameters have the strongest influence on convergence. In response to this result, we introduce PROBO as a generalization of BO that aims to make the method more robust against prior mean parameter misspecification. This is achieved by explicitly accounting for GP imprecision via a prior near-ignorance model. At the heart of this is a novel acquisition function, the generalized lower confidence bound (GLCB). We test our approach against classical BO on a real-world problem from materials science and observe that PROBO converges faster. Further experiments on multimodal and wiggly target functions confirm the superiority of our method.
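The GLCB builds on the classical GP lower confidence bound. As a rough illustration of the BO step it generalizes, here is a minimal numpy sketch of LCB-based candidate selection with an explicitly specified constant prior mean, the component PROBO treats as imprecise; the exact GLCB definition, including the imprecision penalty, is given in the paper, and the kernel, length scale, and `kappa` below are illustrative choices, not the paper's settings.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential kernel between the row vectors of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X_train, y_train, X_cand, noise=1e-6, prior_mean=0.0):
    # GP posterior mean/std under a constant prior mean -- the parameter
    # whose misspecification PROBO is designed to be robust against.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_cand)
    alpha = np.linalg.solve(K, y_train - prior_mean)
    mu = prior_mean + K_s.T @ alpha
    v = np.linalg.solve(K, K_s)
    var = 1.0 - np.sum(K_s * v, axis=0)  # RBF has k(x, x) = 1
    return mu, np.sqrt(np.clip(var, 0.0, None))

def lcb(mu, sigma, kappa=2.0):
    # Classical lower confidence bound for minimization. The GLCB of the
    # paper adds a further term penalizing prior-mean imprecision.
    return mu - kappa * sigma

# Toy 1-D run: three evaluations, pick the next point on a candidate grid.
X = np.array([[0.1], [0.5], [0.9]])
y = np.sin(6 * X[:, 0])
X_cand = np.linspace(0.0, 1.0, 101)[:, None]
mu, sigma = gp_posterior(X, y, X_cand)
x_next = X_cand[np.argmin(lcb(mu, sigma))]  # most promising candidate
```

The sketch makes visible why the prior mean matters: far from the data, `mu` reverts to `prior_mean`, so a misspecified value directly steers where the acquisition function sends the next evaluation.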