Optimizing multiple competing black-box objectives is a challenging problem in many fields, including science, engineering, and machine learning. Multi-objective Bayesian optimization (MOBO) is a sample-efficient approach for identifying the optimal trade-offs between the objectives. However, many existing methods perform poorly when the observations are corrupted by noise. We propose a novel acquisition function, NEHVI, that overcomes this important practical limitation by applying a Bayesian treatment to the popular expected hypervolume improvement (EHVI) criterion and integrating over this uncertainty in the Pareto frontier. We argue that, even in the noiseless setting, generating multiple candidates in parallel is an incarnation of EHVI with uncertainty in the Pareto frontier and therefore can be addressed using the same underlying technique. Through this lens, we derive a natural parallel variant, $q$NEHVI, that reduces the computational complexity of parallel EHVI from exponential to polynomial with respect to the batch size. $q$NEHVI is one-step Bayes-optimal for hypervolume maximization in both noisy and noiseless environments, and we show that it can be optimized effectively with gradient-based methods via sample average approximation. Empirically, we demonstrate not only that $q$NEHVI is substantially more robust to observation noise than existing MOBO approaches, but also that it achieves state-of-the-art optimization performance and competitive wall-times in large-batch environments.
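To make the hypervolume improvement quantity concrete, the sketch below computes it for two maximized objectives in plain Python. This is an illustrative toy, not the paper's implementation: NEHVI additionally integrates this quantity over posterior samples of the Pareto frontier, which is not shown here, and the point coordinates and reference point are made-up example values.

```python
def pareto_2d(points):
    """Non-dominated subset of 2-objective points (maximize both)."""
    # Sort by f1 descending; a point is non-dominated iff its f2 exceeds
    # the best f2 seen among all points with larger-or-equal f1.
    pts = sorted(points, key=lambda p: (-p[0], -p[1]))
    front, best_f2 = [], float("-inf")
    for p in pts:
        if p[1] > best_f2:
            front.append(p)
            best_f2 = p[1]
    return front

def hypervolume_2d(points, ref):
    """Hypervolume dominated by `points` relative to reference point `ref`."""
    front = [p for p in pareto_2d(points) if p[0] > ref[0] and p[1] > ref[1]]
    front.sort(key=lambda p: p[0])  # ascending f1 implies descending f2
    hv, prev_f1 = 0.0, ref[0]
    for f1, f2 in front:
        # Each Pareto point adds a vertical strip of dominated area.
        hv += (f1 - prev_f1) * (f2 - ref[1])
        prev_f1 = f1
    return hv

def hvi(y, points, ref):
    """Hypervolume improvement from adding candidate outcome `y`.

    EHVI takes the expectation of this over the posterior of y; NEHVI
    additionally averages over posterior samples of the Pareto frontier.
    """
    return hypervolume_2d(points + [y], ref) - hypervolume_2d(points, ref)

# Example with a three-point front and reference point (0, 0):
front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, (0.0, 0.0)))      # 6.0
print(hvi((2.5, 2.5), front, (0.0, 0.0)))     # 1.25
```

The sweep-based 2D computation runs in $O(n \log n)$; in higher dimensions exact hypervolume is much more expensive, which is one reason sample-based acquisition functions like $q$NEHVI matter in practice.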