Finding the best hyperparameter configuration of an algorithm for a given optimization problem is an important task in evolutionary computation. In this work, we compare the results of four different hyperparameter tuning approaches for a family of genetic algorithms on 25 diverse pseudo-Boolean optimization problems. More precisely, we compare previously obtained results from a grid search with those obtained from three automated configuration techniques: iterated racing, mixed-integer parallel efficient global optimization, and mixed-integer evolutionary strategies. Using two different cost metrics, the expected running time (ERT) and the area under the empirical cumulative distribution function (ECDF) curve, we find that in several cases the best configurations with respect to ERT are obtained when the area under the ECDF curve is used as the cost metric during the configuration process. Our results suggest that, even when one is ultimately interested in ERT performance, it might be preferable to use anytime performance measures for the configuration task. We also observe that tuning for ERT is much more sensitive to the budget allocated to the target algorithms.
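For reference, the two cost metrics can be sketched as follows, with notation introduced here purely for illustration and without claiming to match the exact variants used in the experiments. Given $r$ independent runs of an algorithm on a problem, a target value $\varphi$, and an evaluation budget $B$, the expected running time is commonly estimated as
\[
\widehat{\mathrm{ERT}}(\varphi) \;=\; \frac{\sum_{i=1}^{r} \min\{T_i(\varphi),\, B\}}{\#\{\, i : T_i(\varphi) \le B \,\}},
\]
where $T_i(\varphi)$ denotes the number of function evaluations that run $i$ needs to reach target $\varphi$. The area under the ECDF curve instead aggregates, over a set of targets and budgets, the fraction of (run, target) pairs for which the target is reached within the given budget; larger values thus correspond to better anytime performance, while smaller ERT values correspond to better fixed-target performance.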