Bayesian Optimisation (BO) methods seek the global optima of objective functions that are available only as a black box or are expensive to evaluate. Such methods construct a surrogate model for the objective function, quantifying the uncertainty in that surrogate through Bayesian inference. Objective evaluations are determined sequentially by maximising an acquisition function at each step. However, this ancillary optimisation problem can be highly non-trivial to solve, owing to the non-convexity of the acquisition function, particularly in the case of batch Bayesian optimisation, where multiple points are selected at every step. In this work we reformulate batch BO as an optimisation problem over the space of probability measures. We construct a new acquisition function based on multipoint expected improvement which is convex over the space of probability measures. Practical schemes for solving this `inner' optimisation problem arise naturally as gradient flows of this objective function. We demonstrate the efficacy of this new method on different benchmark functions and compare with state-of-the-art batch BO methods.
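The standard sequential loop the abstract builds on can be sketched as follows. This is a generic single-point illustration, not the batch method proposed here: a Gaussian-process surrogate with a squared-exponential kernel, the classical expected-improvement (EI) acquisition maximised by grid search, on a hypothetical one-dimensional toy objective. All function names, kernel settings, and the objective itself are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical black-box objective (minimisation); minimum at x = 0.65.
def f(x):
    return (x - 0.65) ** 2

def rbf(a, b, ls=0.15):
    # Squared-exponential kernel on 1-D inputs (assumed lengthscale).
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # GP posterior mean/std at candidates Xs, with prior mean set to
    # the empirical mean of the observations.
    ybar = y.mean()
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = ybar + Ks.T @ np.linalg.solve(K, y - ybar)
    var = np.clip(1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0),
                  1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # EI for minimisation: E[max(best - f(x), 0)] under the posterior.
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Sequential BO: at each step, maximise EI over a candidate grid
# (the 'inner' optimisation problem the abstract refers to).
X = np.array([0.1, 0.5, 0.9])
y = f(X)
grid = np.linspace(0.0, 1.0, 201)
for _ in range(15):
    mu, sigma = gp_posterior(X, y, grid)
    ei = expected_improvement(mu, sigma, y.min())
    x_next = grid[np.argmax(ei)]
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))

best_x = X[np.argmin(y)]
```

In the batch setting, the inner problem instead selects several points jointly (e.g. via multipoint EI), which is exactly where the non-convexity the abstract highlights becomes severe and a grid search over candidate tuples stops being practical.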