The performance of any Machine Learning (ML) algorithm is affected by the choice of its hyperparameters. Since training and evaluating an ML algorithm is usually expensive, a hyperparameter optimization (HPO) method must be computationally efficient to be useful in practice. Most existing approaches to multi-objective HPO rely on evolutionary strategies and metamodel-based optimization; however, few methods account for uncertainty in the performance measurements. This paper presents results on multi-objective hyperparameter optimization under uncertainty in the evaluation of ML algorithms. We combine the sampling strategy of Tree-structured Parzen Estimators (TPE) with the metamodel obtained by training a Gaussian Process Regression (GPR) with heterogeneous noise. Experimental results on three analytical test functions and three ML problems show an improvement over multi-objective TPE and GPR with respect to the hypervolume indicator.
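To make the combination concrete, the following is a minimal single-objective sketch of the idea the abstract describes: TPE-style candidate sampling filtered through a GPR surrogate with heterogeneous (per-point) noise. The toy objective, the LCB acquisition, and all names here are illustrative assumptions, not the authors' implementation; the heterogeneous noise enters through scikit-learn's `alpha` argument, which accepts one noise variance per training sample.

```python
# Sketch: TPE-style sampling + heteroscedastic GPR surrogate (assumed setup).
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def noisy_objective(x):
    """Toy 1-D objective with input-dependent (heterogeneous) noise."""
    noise_var = 0.01 + 0.1 * x**2          # assumed known/estimated variance
    y = np.sin(3 * x) + x**2 + rng.normal(0.0, np.sqrt(noise_var))
    return y, noise_var

# Initial design.
X = rng.uniform(-2, 2, size=12)
obs = np.array([noisy_objective(x) for x in X])
y, noise_vars = obs[:, 0], obs[:, 1]

for it in range(20):
    # TPE-style step: split observations into "good"/"bad" by a quantile
    # and sample candidates from a KDE of the promising points.
    gamma = 0.25
    cut = np.quantile(y, gamma)
    good, bad = X[y <= cut], X[y > cut]
    l = gaussian_kde(good)                  # density of promising points
    g = gaussian_kde(bad)                   # density of the rest
    cand = l.resample(64, seed=int(rng.integers(1_000_000))).ravel()
    cand = np.clip(cand, -2, 2)

    # Heteroscedastic GPR surrogate: per-sample noise variances via `alpha`.
    gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=noise_vars,
                                   normalize_y=True)
    gpr.fit(X.reshape(-1, 1), y)
    mu, sd = gpr.predict(cand.reshape(-1, 1), return_std=True)

    # Rank candidates by a lower confidence bound, nudged by the TPE
    # ratio l(x)/g(x); evaluate the winner and grow the data set.
    score = (mu - sd) - 0.1 * np.log((l(cand) + 1e-12) / (g(cand) + 1e-12))
    x_next = cand[np.argmin(score)]
    y_next, v_next = noisy_objective(x_next)
    X, y = np.append(X, x_next), np.append(y, y_next)
    noise_vars = np.append(noise_vars, v_next)

print(f"best observed: x={X[np.argmin(y)]:.3f}, y={y.min():.3f}")
```

In the multi-objective setting of the paper, a surrogate of this kind would be fit per objective and candidate sets compared via the hypervolume indicator; the sketch above only illustrates how TPE sampling and a noise-aware GPR can share the candidate-selection step.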