Hyperparameter optimization (HPO) is concerned with the automated search for the most appropriate hyperparameter configuration (HPC) of a parameterized machine learning algorithm. A state-of-the-art HPO method is Hyperband, which, however, has its own parameters that influence its performance. One of these parameters, the maximal budget, is especially problematic: If chosen too small, the budget needs to be increased in hindsight and, as Hyperband is not incremental by design, the entire algorithm must be re-run. This is not only costly but also comes with a loss of valuable knowledge already accumulated. In this paper, we propose incremental variants of Hyperband that eliminate these drawbacks, and show that these variants satisfy theoretical guarantees qualitatively similar to those for the original Hyperband with the "right" budget. Moreover, we demonstrate their practical utility in experiments with benchmark data sets.
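To make the role of the maximal budget concrete, the following minimal Python sketch (the helper `hyperband_schedule` is illustrative, not from this paper or any library) computes the bracket schedule of standard Hyperband as described by Li et al. (2017) for a given maximal budget R and halving rate eta. Since the number of brackets s_max = floor(log_eta(R)) is itself a function of R, increasing R in hindsight invalidates the entire schedule, which is exactly the non-incrementality the paper addresses:

```python
import math

def hyperband_schedule(R, eta=3):
    """Bracket schedule of standard Hyperband (Li et al., 2017) for a
    maximal budget R and halving rate eta.  Returns, for each bracket s,
    the successive-halving rungs as pairs (n_i configurations, budget r_i)."""
    s_max = int(math.floor(math.log(R, eta)))
    B = (s_max + 1) * R                    # total budget allotted per bracket
    schedule = []
    for s in range(s_max, -1, -1):
        n = int(math.ceil((B / R) * eta ** s / (s + 1)))  # initial #configs
        rungs = [(n // eta ** i, R / eta ** (s - i)) for i in range(s + 1)]
        schedule.append((s, rungs))
    return schedule

# s_max = floor(log_eta(R)) depends on R, so enlarging the maximal budget
# (say from 81 to 243) changes the number and shape of every bracket;
# this is why vanilla Hyperband must be re-run from scratch.
for s, rungs in hyperband_schedule(R=81):
    print(s, rungs)
```

For R = 81 and eta = 3, the top bracket starts 81 configurations at budget 1 and promotes the best third at each rung up to budget 81; with R = 243, both the number of brackets and every rung change, so nothing computed under the smaller budget carries over.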