Assessing the empirical performance of Multi-Objective Evolutionary Algorithms (MOEAs) is vital when extensively benchmarking a set of MOEAs with the aim of ranking them properly. Multiple performance indicators, e.g., the generational distance and the hypervolume, are frequently applied when reporting experimental data, and typically the data for each indicator are analyzed independently of the other indicators. Such a treatment brings conceptual difficulties in aggregating the results over all performance indicators, and it might fail to discover significant differences among algorithms if the marginal distributions of the performance indicators overlap. Therefore, in this paper, we propose to conduct a multivariate $\mathcal{E}$-test on the joint empirical distribution of the performance indicators to detect potential differences in the data, followed by a post-hoc procedure that employs linear discriminant analysis to determine the superiority between algorithms. The effectiveness of this performance analysis is supported by experiments conducted on four algorithms, 16 problems, and six different numbers of objectives.
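To make the proposed pipeline concrete, below is a minimal sketch in Python of the two stages, assuming the $\mathcal{E}$-test is the two-sample energy test of Székely and Rizzo with a permutation-based p-value, and assuming the post-hoc step compares the two samples along the first linear discriminant. All function names, the significance level, and the simulated indicator data are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of the two-stage analysis, under the assumptions stated
# above; the variable and function names are illustrative only.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def energy_statistic(x: np.ndarray, y: np.ndarray) -> float:
    """Two-sample energy statistic E_{n,m} on multivariate samples x and y."""
    n, m = len(x), len(y)
    return n * m / (n + m) * (
        2.0 * cdist(x, y).mean() - cdist(x, x).mean() - cdist(y, y).mean()
    )


def energy_test(x, y, n_perm=999, seed=None):
    """Permutation p-value of the E-test on the pooled samples."""
    rng = np.random.default_rng(seed)
    observed = energy_statistic(x, y)
    pooled, n = np.vstack([x, y]), len(x)
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        exceed += energy_statistic(pooled[idx[:n]], pooled[idx[n:]]) >= observed
    return observed, (exceed + 1) / (n_perm + 1)


# Simulated per-run joint indicator values for two algorithms
# (columns: e.g., hypervolume and generational distance).
rng = np.random.default_rng(42)
x = rng.normal([0.80, 0.10], 0.05, size=(30, 2))  # algorithm A
y = rng.normal([0.72, 0.14], 0.05, size=(30, 2))  # algorithm B

stat, p = energy_test(x, y, seed=0)
print(f"E-statistic = {stat:.3f}, permutation p-value = {p:.3f}")
if p < 0.05:  # illustrative significance level
    # Post-hoc step: project both samples onto the first linear discriminant
    # and compare the projected class means.
    labels = np.r_[np.zeros(len(x)), np.ones(len(y))]
    z = LinearDiscriminantAnalysis(n_components=1).fit_transform(
        np.vstack([x, y]), labels
    ).ravel()
    print(f"projected means: A = {z[:len(x)].mean():.3f}, "
          f"B = {z[len(x):].mean():.3f}")
```

In practice, `x` and `y` would hold the indicator values collected from repeated runs of two algorithms on a given problem. Which algorithm is declared superior from the projected means depends on the paper's decision rule and on the orientation of each indicator (hypervolume is maximized, generational distance minimized); the sketch only reports the projections.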