Automatically optimizing the hyperparameters of Machine Learning algorithms is one of the primary open questions in AI. Existing work in Hyperparameter Optimization (HPO) trains surrogate models that approximate the hyperparameter response surface, treating it as a regression task. In contrast, we hypothesize that the optimal strategy for training surrogates is to preserve the relative ranks of hyperparameter configurations' performances, casting surrogate training as a Learning to Rank problem. Consequently, we present a novel method that meta-learns neural network surrogates optimized for ranking the configurations' performances, while modeling the uncertainty of their predictions via ensembling. In a large-scale experimental protocol comprising 12 baselines, 16 HPO search spaces, and 86 datasets/tasks, we demonstrate that our method achieves new state-of-the-art results in HPO.
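As a concrete illustration of the idea sketched in the abstract (and not the paper's actual implementation), the snippet below trains a small ensemble of MLP surrogates with a pairwise logistic ranking loss and uses disagreement across ensemble members as a simple uncertainty estimate. All names (`SurrogateMLP`, `pairwise_ranking_loss`), layer sizes, and training settings are hypothetical; it assumes PyTorch is available and that higher performance values are better.

```python
# Illustrative sketch: ranking-loss surrogate ensemble for HPO (assumed details).
import torch
import torch.nn as nn

class SurrogateMLP(nn.Module):
    """Small MLP mapping a hyperparameter configuration to a latent rank score."""
    def __init__(self, dim_config: int, dim_hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_config, dim_hidden), nn.ReLU(),
            nn.Linear(dim_hidden, dim_hidden), nn.ReLU(),
            nn.Linear(dim_hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def pairwise_ranking_loss(scores: torch.Tensor, perf: torch.Tensor) -> torch.Tensor:
    """Logistic pairwise loss: penalize pairs whose predicted order
    disagrees with the observed performance order."""
    diff_scores = scores.unsqueeze(0) - scores.unsqueeze(1)  # all pairwise score differences
    diff_perf = perf.unsqueeze(0) - perf.unsqueeze(1)        # all pairwise performance differences
    mask = (diff_perf > 0).float()                           # pairs where one config truly outperforms the other
    loss = nn.functional.softplus(-diff_scores) * mask       # -log sigmoid of the score margin
    return loss.sum() / mask.sum().clamp(min=1.0)

# Ensemble of independently initialized surrogates; the spread of their
# predictions serves as an uncertainty estimate.
dim_config = 8
ensemble = [SurrogateMLP(dim_config) for _ in range(5)]
optimizers = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in ensemble]

# Toy observations: configurations and their validation performances.
configs = torch.rand(64, dim_config)
perfs = torch.rand(64)

for model, opt in zip(ensemble, optimizers):
    for _ in range(100):
        opt.zero_grad()
        loss = pairwise_ranking_loss(model(configs), perfs)
        loss.backward()
        opt.step()

# Score new candidates: mean rank score as the acquisition signal,
# standard deviation across members as uncertainty.
candidates = torch.rand(16, dim_config)
with torch.no_grad():
    scores = torch.stack([m(candidates) for m in ensemble])
mean_score, uncertainty = scores.mean(0), scores.std(0)
```

Because only pairwise order matters in this loss, the surrogate is invariant to monotone transformations of the performance metric, which is the intuition behind preferring a ranking objective over plain regression.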