The theoretical understanding of multiobjective evolutionary algorithms (MOEAs) lags far behind their success in practice. In particular, previous theory work mostly considers easy problems composed of unimodal objectives. As a first step towards a deeper understanding of how evolutionary algorithms solve multimodal multiobjective problems, we propose the OneJumpZeroJump (OJZJ) problem, a bi-objective problem whose two objectives are isomorphic to the classic jump function benchmark. We prove that the simple evolutionary multiobjective optimizer (SEMO) with probability one does not compute the full Pareto front, regardless of the runtime. In contrast, for all problem sizes $n$ and all jump sizes ${k \in [4..\frac n2 - 1]}$, the global SEMO (GSEMO) covers the Pareto front in an expected number of $\Theta((n-2k)n^{k})$ iterations. For $k = o(n)$, we also show the tighter bound $\frac 32 e n^{k+1} \pm o(n^{k+1})$, which might be the first runtime bound for an MOEA that is tight apart from lower-order terms. We also combine the GSEMO with two approaches that have shown advantages in single-objective multimodal problems. When the GSEMO uses a heavy-tailed mutation operator, the expected runtime improves by a factor of at least $k^{\Omega(k)}$. When the recent stagnation-detection strategy of Rajabi and Witt (2022) is adapted to the GSEMO, the expected runtime also improves by a factor of at least $k^{\Omega(k)}$ and surpasses that of the heavy-tailed GSEMO by a small polynomial factor in $k$. Via an experimental analysis, we show that these asymptotic differences are visible already for small problem sizes: a factor-$5$ speed-up from heavy-tailed mutation and a factor-$10$ speed-up from stagnation detection can be observed already for jump size~$4$ and problem sizes between $10$ and $50$. Overall, our results show that ideas recently developed to help single-objective evolutionary algorithms cope with local optima can also be employed effectively in multiobjective optimization.
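To make the setting concrete, the following is a minimal Python sketch of the bi-objective OJZJ function and of the GSEMO with standard bit mutation, assuming the usual OneJumpZeroJump definition with jump parameter $k$; all function and variable names are illustrative, not taken from the paper.

```python
import random

def ojzj(x, k):
    """Bi-objective OneJumpZeroJump value of bit string x (both objectives are
    maximized), assuming the standard definition with jump parameter k."""
    n, ones = len(x), sum(x)
    zeros = n - ones
    f1 = ones + k if ones <= n - k or ones == n else n - ones      # OneJump-style objective
    f2 = zeros + k if zeros <= n - k or zeros == n else n - zeros  # ZeroJump-style objective
    return (f1, f2)

def dominates(u, v):
    """Strict Pareto dominance for maximization."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def gsemo_on_ojzj(n, k, max_iters=10**7, seed=0):
    """Global SEMO: keep a set of mutually non-dominated solutions; in each
    iteration, mutate a uniformly chosen parent with standard bit mutation
    (each bit flips independently with probability 1/n). Returns the number of
    iterations until the full Pareto front (n - 2k + 3 objective vectors) is
    covered, or max_iters if this does not happen within the budget."""
    rng = random.Random(seed)
    x = tuple(rng.randint(0, 1) for _ in range(n))
    pop = {x: ojzj(x, k)}
    for t in range(1, max_iters + 1):
        parent = rng.choice(list(pop))
        child = tuple(b ^ (rng.random() < 1.0 / n) for b in parent)
        fc = ojzj(child, k)
        if not any(dominates(fv, fc) or fv == fc for fv in pop.values()):
            pop = {s: fv for s, fv in pop.items() if not dominates(fc, fv)}
            pop[child] = fc
        if len(pop) == n - 2 * k + 3:  # all Pareto-optimal objective vectors found
            return t
    return max_iters
```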
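The heavy-tailed GSEMO differs only in the mutation rate: instead of the fixed rate $1/n$, each offspring is created with rate $\alpha/n$, where $\alpha$ is drawn from a power-law distribution on $[1..n/2]$ with exponent $\beta$, as in the fast genetic algorithm of Doerr et al. (2017). The sketch below shows this sampling step as a drop-in replacement for the constant rate in the loop above; the choice $\beta = 1.5$ is illustrative.

```python
def heavy_tailed_rate(n, beta=1.5, rng=random):
    """Sample a mutation strength alpha from a power law on [1..n//2] with
    exponent beta; each bit then flips independently with probability alpha/n.
    A sketch of the heavy-tailed (fast-GA style) operator, not the paper's code."""
    strengths = list(range(1, n // 2 + 1))
    weights = [i ** (-beta) for i in strengths]
    alpha = rng.choices(strengths, weights=weights)[0]
    return alpha / n
```

In `gsemo_on_ojzj`, replacing `1.0 / n` by `heavy_tailed_rate(n, rng=rng)`, sampled anew for every offspring, gives a heavy-tailed GSEMO of the kind studied here; the stagnation-detection variant instead, roughly speaking, raises the mutation strength deterministically after sufficiently long phases without accepting a new solution.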