Optimization algorithms differ fundamentally from human problem solvers. A human accumulates experience through problem-solving, which helps her or him tackle new, unseen problems, whereas an optimization algorithm gains no experience from solving more problems. In recent years, efforts have been made to endow optimization algorithms with some ability to learn from experience, which is referred to as experience-based optimization. In this paper, we argue that hard optimization problems can be tackled more efficiently by making better use of experience gained on related problems. We demonstrate our ideas in the context of expensive optimization, where the aim is to find a near-optimal solution to an expensive optimization problem with as few fitness evaluations as possible. To this end, we propose an experience-based surrogate-assisted evolutionary algorithm (SAEA) framework that improves optimization efficiency on expensive problems, where experience is gained across related expensive tasks via a novel meta-learning method. This experience serves as the task-independent parameters of a deep kernel learning surrogate, and solutions sampled from the target task are then used to adapt the surrogate's task-specific parameters. With the help of experience learning, a competitive regression-based surrogate can be initialized using only $1d$ solutions from the target task, where $d$ is the dimension of the decision space. Our experimental results on expensive multi-objective and constrained optimization problems demonstrate that experience gained from related tasks helps save evaluation budget on the target problem.
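To make the split between task-independent and task-specific parameters concrete, the following is a minimal sketch (not the authors' implementation) of a deep kernel learning surrogate in PyTorch: a small neural feature extractor plays the role of the meta-learned, task-independent experience, while an exact Gaussian process in the learned latent space is conditioned on the $1d$ solutions sampled from the target task as the task-specific adaptation. All class and variable names are illustrative assumptions.

```python
import torch
import torch.nn as nn


class FeatureExtractor(nn.Module):
    """Task-independent part: assumed to be meta-trained across related tasks."""

    def __init__(self, d: int, latent_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d, 32), nn.Tanh(),
            nn.Linear(32, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class DeepKernelGP:
    """Task-specific part: exact GP regression on top of the learned features."""

    def __init__(self, extractor: FeatureExtractor,
                 lengthscale: float = 1.0, noise: float = 1e-4):
        self.extractor = extractor
        self.lengthscale = lengthscale
        self.noise = noise

    def _rbf(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # RBF kernel computed in the latent space of the feature extractor.
        dist2 = torch.cdist(a, b) ** 2
        return torch.exp(-0.5 * dist2 / self.lengthscale ** 2)

    def fit(self, x_train: torch.Tensor, y_train: torch.Tensor) -> None:
        # Task-specific adaptation: condition the GP on the (typically 1*d)
        # evaluated solutions from the target task.
        with torch.no_grad():
            self.z_train = self.extractor(x_train)
        k = self._rbf(self.z_train, self.z_train)
        k = k + self.noise * torch.eye(len(x_train))
        self.chol = torch.linalg.cholesky(k)
        self.alpha = torch.cholesky_solve(y_train.unsqueeze(-1), self.chol)

    def predict(self, x_test: torch.Tensor) -> torch.Tensor:
        # Posterior mean prediction for candidate solutions.
        with torch.no_grad():
            z_test = self.extractor(x_test)
        k_star = self._rbf(z_test, self.z_train)
        return (k_star @ self.alpha).squeeze(-1)


# Usage sketch: d = 10 decision variables, 1*d initial solutions from the target task.
d = 10
extractor = FeatureExtractor(d)          # weights assumed meta-learned elsewhere
surrogate = DeepKernelGP(extractor)
x_init = torch.rand(d, d)                # 1*d sampled solutions
y_init = torch.sin(x_init).sum(dim=1)    # placeholder for the expensive objective
surrogate.fit(x_init, y_init)
preds = surrogate.predict(torch.rand(5, d))
```

In an SAEA loop of the kind described above, such a surrogate would be refit whenever new solutions are evaluated on the target task, while the meta-learned feature extractor remains fixed.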