Efficient Global Optimization (EGO) is the canonical form of Bayesian optimization and has been successfully applied to the global optimization of expensive-to-evaluate black-box problems. However, EGO struggles to scale with dimension and offers limited theoretical guarantees. In this work, a trust-region framework for EGO (TREGO) is proposed and analyzed. TREGO alternates between regular EGO steps and local steps within a trust region. By following a classical scheme for the trust region (based on a sufficient decrease condition), the proposed algorithm enjoys global convergence properties, while departing from EGO only for a subset of optimization steps. Using extensive numerical experiments based on the well-known COCO bound-constrained problems, we first analyze the sensitivity of TREGO to its own parameters, then show that the resulting algorithm consistently outperforms EGO and is competitive with other state-of-the-art black-box optimization methods.
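The global/local alternation and the sufficient-decrease-driven trust-region update described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function `propose` is a hypothetical placeholder (random candidate search) standing in for a real EGO acquisition step that would fit a Gaussian process and maximize Expected Improvement, and the parameter names (`d0`, `gamma`, `kappa`) are assumed for the sketch.

```python
import numpy as np

def trego(f, x0, lb, ub, n_iter=20, d0=0.5, gamma=2.0, kappa=1e-4, seed=0):
    """Sketch of the TREGO scheme: alternate global (EGO-like) steps with
    local steps restricted to a trust region of radius d around the
    current best point; update d via a sufficient decrease condition."""
    rng = np.random.default_rng(seed)
    dim = len(x0)
    x_best = np.asarray(x0, dtype=float)
    f_best = f(x_best)
    d = d0  # trust-region radius

    def propose(lo, hi):
        # Placeholder for an acquisition-maximization step (e.g. Expected
        # Improvement under a Gaussian process); here: best of 64 random points.
        cands = rng.uniform(lo, hi, size=(64, dim))
        return min(cands, key=f)

    for k in range(n_iter):
        if k % 2 == 0:
            # Global step: regular EGO over the whole bound-constrained domain.
            x_new = propose(lb, ub)
            f_new = f(x_new)
        else:
            # Local step: search only within the trust region around x_best.
            lo = np.maximum(lb, x_best - d)
            hi = np.minimum(ub, x_best + d)
            x_new = propose(lo, hi)
            f_new = f(x_new)
            # Classical trust-region management: enlarge on sufficient
            # decrease, shrink otherwise.
            if f_new < f_best - kappa * d**2:
                d *= gamma
            else:
                d /= gamma
        if f_new < f_best:
            x_best, f_best = x_new, f_new
    return x_best, f_best
```

For instance, `trego(lambda v: float(np.sum(v**2)), [0.8, -0.6], np.array([-1.0, -1.0]), np.array([1.0, 1.0]))` minimizes a sphere function on the unit box. Because only the local steps touch the trust region, the scheme departs from plain EGO on just a subset of iterations, which is the structural property the convergence analysis exploits.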