Constrained multiobjective optimization has gained much interest in the past few years. However, constrained multiobjective optimization problems (CMOPs) remain insufficiently understood. Consequently, choosing adequate CMOPs for benchmarking is difficult and lacks a formal background. This paper addresses this issue by exploring CMOPs from a performance space perspective. First, it presents a novel performance assessment approach designed explicitly for constrained multiobjective optimization. This methodology offers a first attempt to simultaneously measure performance in approximating the Pareto front and in satisfying the constraints. Second, it proposes an approach to measure the capability of a given optimization problem to differentiate among algorithm performances. Finally, this approach is used to contrast eight frequently used artificial test suites of CMOPs. The experimental results reveal which suites are more efficient in discriminating among three well-known multiobjective optimization algorithms. Benchmark designers can use these results to select the most appropriate CMOPs for their needs.