Bayesian optimization (BO) is a powerful framework for optimizing black-box, expensive-to-evaluate functions. Over the past decade, many algorithms have been proposed to integrate cheaper, lower-fidelity approximations of the objective function into the optimization process, with the goal of converging towards the global optimum at a reduced cost. This task is generally referred to as multi-fidelity Bayesian optimization (MFBO). However, MFBO algorithms can incur higher optimization costs than their vanilla BO counterparts, especially when the low-fidelity sources are poor approximations of the objective function, thereby defeating their purpose. To address this issue, we propose rMFBO (robust MFBO), a methodology to make any GP-based MFBO scheme robust to the addition of unreliable information sources. rMFBO comes with a theoretical guarantee that its performance can be bounded relative to its vanilla BO analog with high, controllable probability. We demonstrate the effectiveness of the proposed methodology on a number of numerical benchmarks, where it outperforms earlier MFBO methods when the auxiliary sources are unreliable. We expect rMFBO to be particularly useful for reliably including human experts with varying levels of knowledge within BO processes.
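For readers less familiar with the setting, the sketch below illustrates the vanilla GP-based BO loop that serves as the reference point for rMFBO's guarantee, together with a toy low-fidelity source. It is not the paper's algorithm: the objective `f_high`, the cheap approximation `f_low`, the domain, and the budget are all hypothetical stand-ins, and the loop shown deliberately ignores `f_low`, which is what an MFBO scheme would additionally query (and what can hurt when that source is unreliable).

```python
# Minimal, illustrative sketch of a vanilla GP-based BO loop (not rMFBO).
# f_high / f_low are hypothetical toy functions for illustration only.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f_high(x):
    # expensive black-box objective (toy stand-in)
    return -np.sin(3 * x) - x**2 + 0.7 * x

def f_low(x):
    # cheap low-fidelity source; possibly a poor approximation of f_high.
    # Vanilla BO never queries it; an MFBO scheme would.
    return f_high(x) + 0.8 * np.cos(5 * x)

def expected_improvement(mu, sigma, best):
    # standard EI acquisition for maximization
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 2.0, size=(3, 1))          # initial design
y = f_high(X).ravel()
grid = np.linspace(-1.0, 2.0, 500).reshape(-1, 1)

for _ in range(15):                               # vanilla BO iterations
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.vstack([X, x_next])
    y = np.append(y, f_high(x_next))

print("best value found:", y.max())
```

In this framing, rMFBO's claim is about the loop above: whatever additional (possibly unreliable) sources are consulted, the resulting performance remains bounded relative to this vanilla baseline with high, controllable probability.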