Expected improvement (EI) is one of the most widely used acquisition functions in Bayesian optimization (BO). Despite decades of proven success in applications, important open questions remain about the theoretical convergence behavior and rates of EI. In this paper, we contribute to the convergence theory of EI in three novel and critical areas. First, we consider objective functions that satisfy the Gaussian process (GP) prior assumption, whereas existing works mostly focus on functions in a reproducing kernel Hilbert space (RKHS). Second, we establish, for the first time, the asymptotic error bound and its corresponding rate for GP-EI with noisy observations under the GP prior assumption. Third, by investigating the exploration and exploitation properties of the non-convex EI function, we establish improved error bounds for GP-EI in both the noise-free and noisy cases.
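For concreteness, the GP-EI acquisition function discussed above has a standard closed form under the GP posterior; the sketch below states it for minimization, with notation (the posterior mean and standard deviation and the incumbent value) that is illustrative rather than taken from the paper.

```latex
% Closed-form expected improvement under a GP posterior (standard BO
% result; the symbols below are our labels, not necessarily the paper's).
% Minimization with best observed value f_n^* after n evaluations:
\[
  \mathrm{EI}_n(x)
    = \bigl(f_n^* - \mu_n(x)\bigr)\,\Phi\!\bigl(z_n(x)\bigr)
      + \sigma_n(x)\,\varphi\!\bigl(z_n(x)\bigr),
  \qquad
  z_n(x) = \frac{f_n^* - \mu_n(x)}{\sigma_n(x)},
\]
% where \mu_n(x) and \sigma_n(x) are the GP posterior mean and standard
% deviation at x, and \Phi, \varphi are the standard normal CDF and PDF.
% The \Phi term rewards exploitation (candidates with low posterior mean);
% the \varphi term rewards exploration (candidates with high posterior
% uncertainty).
```

The split between these two terms is the exploitation–exploration structure that the third contribution analyzes to obtain the improved error bounds.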