Bayesian Optimization (BO) couples Gaussian Process (GP) surrogates with sequential design to optimize expensive-to-evaluate black-box functions. Design heuristics, or so-called acquisition functions, like expected improvement (EI), balance exploration and exploitation to furnish global solutions under stringent evaluation budgets. However, they fall short when seeking robust optima, meaning solutions residing in a wider domain of attraction. Robust solutions are useful when inputs are imprecisely specified, or when a series of solutions is desired. A common mathematical programming technique in such settings involves an adversarial objective, biasing a local solver away from ``sharp'' troughs. Here we propose a surrogate modeling and active learning technique called robust expected improvement (REI) that ports this adversarial methodology into the BO/GP framework. After describing the methods, we illustrate and draw comparisons to several competitors on benchmark synthetic and real problems of varying complexity.
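To fix notation, the classical EI acquisition referenced above (not the paper's proposed REI) can be sketched as follows. This is a minimal illustration assuming a minimization problem, with `mu` and `sigma` standing in for a GP posterior mean and standard deviation at candidate inputs, and `f_best` the best objective value observed so far:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Classical EI for minimization: large where the GP predicts
    improvement over the incumbent f_best, or is highly uncertain.
    EI(x) = (f_best - mu) * Phi(z) + sigma * phi(z), z = (f_best - mu) / sigma.
    """
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# At a point predicted equal to the incumbent, EI reduces to sigma * phi(0);
# points with worse predicted means earn less EI, trading off exploitation
# (low mu) against exploration (high sigma).
ei_equal = expected_improvement(np.array([0.0]), np.array([1.0]), 0.0)
ei_worse = expected_improvement(np.array([1.0]), np.array([1.0]), 0.0)
```

In BO, this criterion is maximized over the input domain to pick the next evaluation; REI, as described in the abstract, modifies the improvement target to favor wide, robust basins rather than sharp troughs.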