Learning to optimize (L2O) has emerged as a powerful framework for black-box optimization (BBO). L2O learns optimization strategies from the target task automatically, without human intervention. This paper focuses on achieving better performance on high-dimensional and expensive BBO problems under a limited function evaluation budget, which is a core challenge of black-box optimization. Current L2O-based methods handle this setting poorly, because they require a large number of evaluations of expensive black-box functions during training and offer weak representations of optimization strategies. To address these issues, 1) we exploit cheap surrogate functions of the target task to guide the design of optimization strategies; and 2) drawing on the mechanisms of evolutionary algorithms (EAs), we propose a novel framework called B2Opt, which provides a stronger representation of optimization strategies. Compared with BBO baselines, B2Opt achieves 3 to $10^6$ times performance improvement at a lower function evaluation cost. We evaluate our proposal on high-dimensional synthetic functions and two real-world applications. We also find that deep B2Opt models perform better than shallow ones.