In this short paper we investigate whether meta-learning techniques can be used to tune the hyperparameters of machine learning models more effectively using successive halving (SH). We propose a novel variant of the SH algorithm (MeSH) that uses meta-regressors to determine which candidate configurations should be eliminated at each round. We apply MeSH to the problem of tuning the hyperparameters of a gradient-boosted decision tree model. By training and tuning our meta-regressors on existing tuning jobs from 95 datasets, we demonstrate that MeSH often finds a better solution than both SH and random search.
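To make the core idea concrete, below is a minimal Python sketch of a MeSH-style tuning loop, assuming hypothetical `train_eval` and `meta_predict` callables; the paper's actual elimination rule, budget schedule, and meta-regressor interface may differ.

```python
def mesh(configs, train_eval, meta_predict, eta=2, num_rounds=3, budget=1):
    """Illustrative MeSH-style successive halving (sketch, not the paper's code).

    configs:      list of candidate hyperparameter configurations
    train_eval:   callable (config, budget) -> validation score at that budget
    meta_predict: callable (config, scores_so_far) -> forecast of final score;
                  stands in for the trained meta-regressor (hypothetical API)
    """
    scores = {i: [] for i in range(len(configs))}  # per-candidate score history
    alive = list(range(len(configs)))
    for _ in range(num_rounds):
        # Evaluate every surviving candidate at the current budget.
        for i in alive:
            scores[i].append(train_eval(configs[i], budget))
        # Plain SH keeps the candidates with the best *observed* scores;
        # MeSH instead ranks by the meta-regressor's forecast of each
        # candidate's final performance before eliminating the rest.
        alive.sort(key=lambda i: meta_predict(configs[i], scores[i]),
                   reverse=True)
        alive = alive[:max(1, len(alive) // eta)]
        budget *= eta  # survivors receive a larger budget next round
    return configs[alive[0]]
```

Note that the only change relative to plain SH is the sort key: swapping the latest observed score for the meta-regressor's forecast is what lets early rounds avoid eliminating slow-starting configurations that are predicted to finish strong.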