Multi-objective Bayesian optimization aims to find the Pareto front of optimal trade-offs between a set of expensive objectives while collecting as few samples as possible. In some cases, the objectives can be evaluated separately, and each objective can carry a different latency or evaluation cost. This presents an opportunity to learn the Pareto front faster by evaluating the cheaper objectives more frequently. We propose a scalarization-based knowledge-gradient acquisition function which accounts for the different evaluation costs of the objectives. We prove consistency of the algorithm and show empirically that it significantly outperforms a benchmark algorithm which always evaluates both objectives.
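The core idea above can be illustrated with a minimal toy sketch. This is not the paper's actual acquisition function: the Chebyshev scalarization is one standard choice among many, and the "posterior uncertainty per unit cost" selection rule below is a simplified stand-in for a knowledge-gradient computation. All function names and the heuristic itself are illustrative assumptions.

```python
import numpy as np

def chebyshev_scalarization(y, weights, ref):
    """Augmented Chebyshev scalarization of an objective vector y
    (minimization convention). A common way to turn a multi-objective
    problem into a family of single-objective ones."""
    diff = weights * (y - ref)
    return np.max(diff) + 0.05 * np.sum(diff)

def select_evaluation(candidates, posterior_std, costs):
    """Toy decoupled selection: pick the (candidate, objective) pair
    with the largest predictive uncertainty per unit evaluation cost.

    candidates:    length-n array of candidate identifiers
    posterior_std: (n, m) predictive std of each objective's surrogate
    costs:         length-m evaluation cost of each objective

    Dividing by cost makes cheap objectives attractive more often,
    mimicking the cost-aware behaviour described in the abstract
    (the real method would compare knowledge-gradient values per cost).
    """
    gain_per_cost = posterior_std / costs
    i, j = np.unravel_index(np.argmax(gain_per_cost), gain_per_cost.shape)
    return candidates[i], j
```

For example, with two candidates, equal uncertainty on an expensive second objective, and a cheap first objective, the rule evaluates only the cheap objective at the most uncertain candidate, rather than paying for both.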