Black box optimization requires specifying a search space to explore for solutions, e.g. a d-dimensional compact space, and this choice is critical for getting the best results at a reasonable budget. Unfortunately, determining a high quality search space can be challenging in many applications. For example, when tuning hyperparameters for machine learning pipelines on a new problem given a limited budget, one must strike a balance between excluding potentially promising regions and keeping the search space small enough to be tractable. The goal of this work is to motivate -- through example applications in tuning deep neural networks -- the problem of predicting the quality of search spaces conditioned on budgets, as well as to provide a simple scoring method based on a utility function applied to a probabilistic response surface model, similar to Bayesian optimization. We show that the method we present can compute meaningful budget-conditional scores in a variety of situations. We also provide experimental evidence that accurate scores can be useful in constructing and pruning search spaces. Ultimately, we believe scoring search spaces should become standard practice in the experimental workflow for deep learning.
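To make the idea concrete, below is a minimal illustrative sketch, not the paper's exact scoring method: it fits a Gaussian-process response surface to previously observed (configuration, metric) pairs and scores a candidate search space at budget b as the expected best value among b uniformly sampled configurations, where the unknown objective is simulated by joint posterior draws. The function name `score_search_space`, the uniform sampling policy, the Matern kernel, and the expected-maximum utility are all assumptions chosen for illustration.

```python
# Hedged sketch: budget-conditional scoring of a search space via a
# probabilistic response surface model. Not the paper's exact utility
# function; names and modeling choices here are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def score_search_space(gp, bounds, budget, n_sim=256, rng=None):
    """Expected best (maximum) simulated objective from `budget` random trials.

    gp      : fitted GaussianProcessRegressor (the response surface model)
    bounds  : array of shape (d, 2) with per-dimension [low, high] limits
    budget  : number of trials the tuning run is allowed
    n_sim   : number of Monte Carlo simulations of a full tuning run
    """
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)
    best_values = np.empty(n_sim)
    for s in range(n_sim):
        # Draw `budget` candidate configurations uniformly from the space.
        X = rng.uniform(bounds[:, 0], bounds[:, 1],
                        size=(budget, bounds.shape[0]))
        # One joint posterior sample stands in for the unknown objective.
        y_sim = gp.sample_y(X, n_samples=1,
                            random_state=int(rng.integers(2**31 - 1))).ravel()
        best_values[s] = y_sim.max()
    return best_values.mean()


if __name__ == "__main__":
    # Toy 2-D example: a handful of past observations of a metric to maximize.
    rng = np.random.default_rng(0)
    X_obs = rng.uniform(0.0, 1.0, size=(20, 2))
    y_obs = -((X_obs - 0.7) ** 2).sum(axis=1)  # metric peaks near (0.7, 0.7)
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  normalize_y=True).fit(X_obs, y_obs)

    wide = [[0.0, 1.0], [0.0, 1.0]]
    narrow = [[0.6, 0.8], [0.6, 0.8]]  # tight box around the apparent peak
    for b in (5, 50):
        print(f"budget={b}: wide={score_search_space(gp, wide, b, rng=1):.4f}, "
              f"narrow={score_search_space(gp, narrow, b, rng=1):.4f}")
```

Under these assumptions, the narrow space tends to score higher at a small budget, while the gap shrinks as the budget grows, which is the kind of budget-conditional comparison the abstract describes.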