Bayesian optimization is a powerful collection of methods for optimizing expensive, stochastic black-box functions. One key component of a Bayesian optimization algorithm is the acquisition function, which determines which solution should be evaluated in each iteration. A popular and very effective choice is the Knowledge Gradient acquisition function; however, it cannot be computed analytically, and the various implementations proposed in the literature rely on different approximations. In this paper, we review and compare the spectrum of Knowledge Gradient implementations and propose One-shot Hybrid KG, a new approach that combines several previously proposed ideas, is cheap to compute, and is both powerful and efficient. We prove that the new method preserves the theoretical properties of previous methods, and we empirically demonstrate its drastically reduced computational overhead with equal or improved performance. All experiments are implemented in BoTorch, and the code is available on GitHub.
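For context, the following is a minimal sketch of how the standard one-shot Knowledge Gradient is invoked in BoTorch through its qKnowledgeGradient class, which optimizes the fantasy points jointly with the candidate in a single call. The toy objective, noise level, fantasy count, and optimizer settings are illustrative assumptions, not the paper's experimental setup or the proposed One-shot Hybrid KG.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood
from botorch.acquisition import qKnowledgeGradient
from botorch.optim import optimize_acqf

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)  # search space [0, 1]^2

# Noisy observations of an assumed toy objective at a small initial design.
train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = -(train_X - 0.5).pow(2).sum(dim=-1, keepdim=True)
train_Y += 0.05 * torch.randn_like(train_Y)  # simulated observation noise

# Fit a GP surrogate to the observations.
model = SingleTaskGP(train_X, train_Y)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_mll(mll)

# One-shot KG: the inner maximization over fantasy models is replaced by
# extra decision variables, so one optimize_acqf call handles everything.
qkg = qKnowledgeGradient(model, num_fantasies=32)
candidate, kg_value = optimize_acqf(
    acq_function=qkg,
    bounds=bounds,
    q=1,              # one new evaluation per BO iteration
    num_restarts=5,
    raw_samples=128,  # initialization heuristic for the multi-start optimizer
)
print(candidate, kg_value)
```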