Efficient Global Optimization (EGO) is the canonical form of Bayesian optimization and has been successfully applied to the global optimization of expensive-to-evaluate black-box problems. However, EGO struggles to scale with dimension and offers limited theoretical guarantees. In this work, we propose and analyze a trust-region-like EGO method (TREGO). TREGO alternates between regular EGO steps and local steps within a trust region. By following a classical trust-region scheme (based on a sufficient decrease condition), we show that our algorithm enjoys strong global convergence properties while departing from EGO only for a subset of optimization steps. Using extensive numerical experiments on the well-known COCO benchmark, we first analyze the sensitivity of TREGO to its own parameters, then show that the resulting algorithm consistently outperforms EGO and is competitive with other state-of-the-art global optimization methods.
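The alternation scheme described above can be illustrated with a minimal sketch. This is not the authors' implementation: the global phase is stood in for by uniform random sampling (a real EGO step would maximize an acquisition function over a Gaussian process surrogate), and the trust-region update uses a generic sufficient-decrease rule with hypothetical parameters `beta`, `gamma`, and margin constant `1e-4`.

```python
import random

def trego_sketch(f, x0, n_iter=30, sigma0=0.5, beta=0.5, gamma=2.0, sigma_min=1e-6):
    """Hypothetical sketch of a TREGO-style alternation (illustrative only).

    Alternates a 'global' phase (uniform random sampling, standing in for a
    regular EGO step) with a 'local' phase restricted to a trust region of
    radius sigma around the incumbent. The radius follows a classical
    sufficient-decrease rule: expand on success, contract on failure.
    """
    best_x, best_f = list(x0), f(x0)
    sigma = sigma0
    for k in range(n_iter):
        if k % 2 == 0:
            # Global step: sample anywhere in the (assumed) domain [-5, 5]^d.
            x = [random.uniform(-5.0, 5.0) for _ in best_x]
        else:
            # Local step: sample only within the trust region around best_x.
            x = [xi + random.uniform(-sigma, sigma) for xi in best_x]
        fx = f(x)
        # Sufficient decrease condition with an assumed margin of 1e-4 * sigma^2.
        if fx < best_f - 1e-4 * sigma ** 2:
            best_x, best_f = x, fx
            sigma = min(gamma * sigma, sigma0)   # success: expand (capped)
        else:
            sigma = max(beta * sigma, sigma_min)  # failure: contract
    return best_x, best_f
```

Because the incumbent is only replaced under the sufficient decrease condition, the best observed value is monotonically non-increasing, which is the mechanism behind the convergence analysis mentioned in the abstract.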