Hyperparameter optimization (HPO) is a necessary step to ensure the best possible performance of Machine Learning (ML) algorithms. Several methods have been developed to perform HPO; most focus on optimizing a single performance measure (typically an error-based measure), and the literature on such single-objective HPO problems is vast. Recently, however, algorithms have appeared that focus on optimizing multiple conflicting objectives simultaneously. This article presents a systematic survey of the literature published between 2014 and 2020 on multi-objective HPO algorithms, distinguishing between metaheuristic-based algorithms, metamodel-based algorithms, and approaches using a mixture of both. We also discuss the quality metrics used to compare multi-objective HPO procedures and present future research directions.