Hyperparameter optimization constitutes a large part of typical modern machine learning workflows. This arises from the fact that machine learning methods and corresponding preprocessing steps often only yield optimal performance when hyperparameters are properly tuned. In many applications, however, we are not interested in optimizing ML pipelines solely for predictive accuracy; additional metrics or constraints must be considered when determining an optimal configuration, resulting in a multi-objective optimization problem. This is often neglected in practice, due to a lack of knowledge and of readily available software implementations for multi-objective hyperparameter optimization. In this work, we introduce the reader to the basics of multi-objective hyperparameter optimization and motivate its usefulness in applied ML. Furthermore, we provide an extensive survey of existing optimization strategies, from the domains of both evolutionary algorithms and Bayesian optimization. We illustrate the utility of multi-objective optimization (MOO) in several concrete ML applications, considering objectives such as operating conditions, prediction time, sparseness, fairness, interpretability, and robustness.