The theoretical understanding of multiobjective evolutionary algorithms (MOEAs) lags far behind their success in practice. In particular, previous theory work mostly considers easy problems composed of unimodal objectives. As a first step towards a deeper understanding of how evolutionary algorithms solve multimodal multiobjective problems, we propose the OJZJ problem, a bi-objective problem whose two objectives are isomorphic to the classic jump function benchmark. We prove that the simple evolutionary multiobjective optimizer (SEMO), with probability one, does not compute the full Pareto front, regardless of the runtime. In contrast, for all problem sizes $n$ and all jump sizes ${k \in [4..\frac n2 - 1]}$, the global SEMO (GSEMO) covers the Pareto front in an expected number of $\Theta((n-2k)n^{k})$ iterations. For $k = o(n)$, we also show the tighter bound $\frac 32 e n^{k+1} \pm o(n^{k+1})$, which might be the first runtime bound for an MOEA that is tight apart from lower-order terms. We further combine the GSEMO with two approaches that have shown advantages on single-objective multimodal problems. When the GSEMO uses a heavy-tailed mutation operator, the expected runtime improves by a factor of at least $k^{\Omega(k)}$. When the recent stagnation-detection strategy of Rajabi and Witt (2022) is adapted to the GSEMO, the expected runtime also improves by a factor of at least $k^{\Omega(k)}$ and surpasses the heavy-tailed GSEMO by a small polynomial factor in $k$. Via an experimental analysis, we show that these asymptotic differences are already visible for small problem sizes: a factor-$5$ speed-up from heavy-tailed mutation and a factor-$10$ speed-up from stagnation detection can be observed already for jump size~$4$ and problem sizes between $10$ and $50$. Overall, our results show that the ideas recently developed to help single-objective evolutionary algorithms cope with local optima can also be employed effectively in multiobjective optimization.
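For concreteness, the bi-objective structure described above can be sketched in a few lines. This is a minimal illustration under the assumption that OJZJ applies a jump-type function of jump size $k$ once to the number of ones and once to the number of zeros of a bit string; the exact offsets used in the paper may differ.

```python
def ojzj(x, k):
    """Sketch of a OneJumpZeroJump-style bi-objective function (assumed
    formalization): one jump objective counts ones, the other counts zeros."""
    n = len(x)
    ones = sum(x)
    zeros = n - ones

    def jump(m):
        # Fitness increases with m up to n - k, drops into a valley of
        # width k - 1, and peaks again at m = n (the jump to the optimum).
        return k + m if m <= n - k or m == n else n - m

    return (jump(ones), jump(zeros))
```

Under this sketch, the all-ones string maximizes the first objective, the all-zeros string the second, and strings whose number of ones lies strictly between $n-k$ and $n$ sit in a fitness valley of the first objective.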
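The GSEMO and the heavy-tailed mutation operator discussed above can be sketched as follows. This is a generic illustration, not the paper's exact pseudocode: the archive-update rule (accept an offspring unless it is weakly dominated, then discard everything it weakly dominates) is the standard GSEMO scheme, and the power-law exponent `beta = 1.5` is an assumed example value.

```python
import random

def weakly_dominates(a, b):
    """a weakly dominates b if a is at least as good in every objective."""
    return all(ai >= bi for ai, bi in zip(a, b))

def standard_bit_mutation(x):
    """Flip each bit independently with probability 1/n."""
    n = len(x)
    return [b ^ (random.random() < 1 / n) for b in x]

def heavy_tailed_mutation(x, beta=1.5):
    """Heavy-tailed ("fast") mutation: sample a rate alpha/n with alpha
    power-law distributed on {1, ..., n//2}, then flip bits at that rate."""
    n = len(x)
    ks = list(range(1, n // 2 + 1))
    alpha = random.choices(ks, weights=[i ** -beta for i in ks])[0]
    return [b ^ (random.random() < alpha / n) for b in x]

def gsemo(f, n, budget, mutate=standard_bit_mutation):
    """Global SEMO: maintain an archive of mutually non-dominated points."""
    x = tuple(random.randint(0, 1) for _ in range(n))
    pop = {x: f(x)}
    for _ in range(budget):
        parent = random.choice(list(pop))
        y = tuple(mutate(list(parent)))
        fy = f(y)
        # Accept y unless some archived point weakly dominates it.
        if not any(weakly_dominates(fv, fy) for fv in pop.values()):
            pop = {p: fv for p, fv in pop.items()
                   if not weakly_dominates(fy, fv)}
            pop[y] = fy
    return pop
```

Passing `mutate=heavy_tailed_mutation` instead of the default standard bit mutation is the modification whose $k^{\Omega(k)}$ speed-up on OJZJ is analyzed in the paper; the loop structure itself is unchanged.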