The expected improvement (EI) is one of the most popular acquisition functions for Bayesian optimization (BO) and has demonstrated good empirical performance in many applications for the minimization of simple regret. However, under the evaluation metric of cumulative regret, the performance of EI may not be competitive, and its existing theoretical regret upper bound still has room for improvement. To adapt EI for better performance under cumulative regret, we introduce a novel quantity called the evaluation cost, which is compared against the acquisition function, and with this develop the expected improvement-cost (EIC) algorithm. In each iteration of EIC, a new point with the largest acquisition function value is sampled only if that value exceeds its evaluation cost; if none meets this criterion, the current best point is resampled. The evaluation cost quantifies the potential downside of sampling a point, which is important under the cumulative regret metric because the objective function value in every iteration affects the performance measure. We further establish in theory a near-optimal regret upper bound for EIC under the squared-exponential covariance kernel and mild regularity conditions, and perform experiments to illustrate the improvement of EIC over several popular BO algorithms.
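The per-iteration decision rule described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the abstract does not specify the functional form of the evaluation cost, so `eval_cost` below is a hypothetical placeholder supplied by the caller, and the GP posterior is abstracted as a function returning a mean and standard deviation. The EI formula shown is the standard closed form for minimization under a Gaussian posterior.

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for minimization under a Gaussian posterior N(mu, sigma^2)."""
    if sigma <= 0.0:
        return 0.0
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))      # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (f_best - mu) * Phi + sigma * phi

def eic_step(candidates, posterior, x_best, f_best, eval_cost):
    """One EIC iteration (sketch): among candidate points, find the maximizer
    of EI; sample it only if its EI exceeds its evaluation cost, otherwise
    resample the current best point.

    posterior(x) -> (mu, sigma); eval_cost(x) -> float (hypothetical form)."""
    best_x, best_ei = None, -float("inf")
    for x in candidates:
        mu, sigma = posterior(x)
        ei = expected_improvement(mu, sigma, f_best)
        if ei > best_ei:
            best_x, best_ei = x, ei
    if best_ei > eval_cost(best_x):
        return best_x   # acquisition value justifies the potential downside
    return x_best       # otherwise fall back to the incumbent
```

For example, with a candidate whose posterior is uncertain (large EI) and a small evaluation cost, `eic_step` explores that candidate; raising the cost makes it resample the incumbent instead.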