Bayesian optimization is a popular framework for the optimization of black-box functions. Multifidelity methods accelerate Bayesian optimization by exploiting low-fidelity representations of expensive objective functions. Popular multifidelity Bayesian strategies rely on sampling policies that account only for the immediate reward obtained by evaluating the objective function at a specific input, precluding the greater informative gains that might be obtained by looking further ahead. This paper proposes a non-myopic multifidelity Bayesian framework that captures the long-term reward from future steps of the optimization. Our computational strategy is based on a two-step lookahead multifidelity acquisition function that maximizes the cumulative reward, measured as the improvement in the solution over the next two steps. We demonstrate that the proposed algorithm outperforms a standard multifidelity Bayesian framework on popular benchmark optimization problems.
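The two-step lookahead idea can be sketched as a Monte Carlo rollout over a Gaussian-process surrogate: a candidate's score is its immediate expected improvement plus the average of the best expected improvement achievable after conditioning the surrogate on a fantasized observation at that candidate. The minimal sketch below is single-fidelity (the multifidelity version would additionally select a fidelity level and weight rewards by evaluation cost) and assumes a simple RBF-kernel GP; the names `two_step_ei` and `n_fantasy` are illustrative, not the paper's implementation.

```python
import numpy as np
from math import erf, sqrt

# Standard normal cdf/pdf (vectorized, NumPy-only).
_norm_cdf = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))
_norm_pdf = lambda z: np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    """GP posterior mean and variance at test points Xs."""
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(np.diag(rbf(Xs, Xs)) - np.sum(v * v, axis=0), 1e-12, None)
    return mu, var

def expected_improvement(mu, var, best):
    """EI for minimization: E[max(best - f, 0)]."""
    sd = np.sqrt(var)
    z = (best - mu) / sd
    return np.maximum((best - mu) * _norm_cdf(z) + sd * _norm_pdf(z), 0.0)

def two_step_ei(x_cand, X, y, x_grid, n_fantasy=16, rng=None):
    """Immediate EI at x_cand plus the Monte Carlo estimate of the
    best second-step EI after fantasizing an observation at x_cand."""
    rng = np.random.default_rng(0) if rng is None else rng
    best = y.min()
    mu1, var1 = gp_posterior(X, y, np.array([x_cand]))
    ei1 = expected_improvement(mu1, var1, best)[0]
    ei2 = 0.0
    for _ in range(n_fantasy):
        y_f = rng.normal(mu1[0], np.sqrt(var1[0]))   # fantasized outcome
        X2, y2 = np.append(X, x_cand), np.append(y, y_f)
        mu2, var2 = gp_posterior(X2, y2, x_grid)     # conditioned surrogate
        ei2 += expected_improvement(mu2, var2, min(best, y_f)).max()
    return ei1 + ei2 / n_fantasy                     # cumulative two-step reward

# Toy demo: pick the next evaluation of a 1-D objective on a grid.
rng = np.random.default_rng(1)
f = lambda x: np.sin(3.0 * x)
X = np.array([0.1, 0.5, 0.9, 1.4])
y = f(X)
grid = np.linspace(0.0, 1.5, 31)
scores = np.array([two_step_ei(x, X, y, grid, rng=rng) for x in grid])
x_next = grid[np.argmax(scores)]
```

Because the second-step term is always nonnegative, this score dominates the myopic one-step EI, which is what lets the policy prefer points whose evaluation unlocks larger gains at the following step.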