Automated hyperparameter optimization (HPO) has gained great popularity and is an important ingredient of most automated machine learning frameworks. The design of HPO algorithms, however, is still an unsystematic and manual process: limitations of prior work are identified, and the proposed improvements, even though guided by expert knowledge, remain somewhat arbitrary. This rarely yields a holistic understanding of which algorithmic components drive performance and carries the risk of overlooking good algorithmic design choices. We present a principled approach to automated benchmark-driven algorithm design applied to multifidelity HPO (MF-HPO): First, we formalize a rich space of MF-HPO candidates that includes, but is not limited to, common HPO algorithms, and then present a configurable framework covering this space. To find the best candidate automatically and systematically, we follow a programming-by-optimization approach and search over the space of algorithm candidates via Bayesian optimization. By performing an ablation analysis, we challenge whether the design choices found are necessary or could be replaced by simpler, more naive ones. We observe that a relatively simple configuration, in some ways simpler than established methods, performs very well as long as some critical configuration parameters have the right values.
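To make the programming-by-optimization idea concrete, here is a minimal, dependency-free Python sketch: a design space over MF-HPO algorithm components is searched at the meta level, with each candidate design scored by its benchmark performance. All component names and the toy scoring function are illustrative assumptions, not the paper's actual framework, and random search stands in for the Bayesian optimization used at the meta level.

```python
import random

# Hypothetical design space of MF-HPO algorithm components (the names are
# illustrative assumptions, not the components of the paper's framework).
DESIGN_SPACE = {
    "fidelity_schedule": ["geometric", "linear"],
    "eta": [2, 3, 4],                       # halving rate between fidelity stages
    "sampling": ["random", "model_based"],  # how new configurations are proposed
    "surrogate": ["none", "random_forest", "gp"],
    "acquisition": ["ei", "ucb"],
}

def sample_candidate(rng):
    """Draw one MF-HPO algorithm candidate from the design space."""
    return {name: rng.choice(values) for name, values in DESIGN_SPACE.items()}

def benchmark_score(candidate):
    """Placeholder for running the candidate algorithm on a benchmark suite
    and aggregating its performance (e.g., mean normalized regret).
    A deterministic toy score is faked here purely for illustration."""
    score = 0.0
    score += 1.0 if candidate["sampling"] == "model_based" else 0.5
    score += 0.3 if candidate["surrogate"] != "none" else 0.0
    score += 0.1 * candidate["eta"]
    return score

def design_by_optimization(n_iter=100, seed=0):
    """Meta-level search over algorithm designs. The paper uses Bayesian
    optimization at this level; random search is used here only to keep
    the sketch dependency-free."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_iter):
        candidate = sample_candidate(rng)
        score = benchmark_score(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

if __name__ == "__main__":
    best, score = design_by_optimization()
    print(f"best candidate: {best} (score={score:.2f})")
```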