While Bayesian Optimization (BO) is a very popular method for optimizing expensive black-box functions, it fails to leverage the experience of domain experts. This causes BO to waste function evaluations on bad design choices (e.g., machine learning hyperparameters) that the expert already knows work poorly. To address this issue, we introduce Bayesian Optimization with a Prior for the Optimum (BOPrO). BOPrO allows users to inject their knowledge into the optimization process in the form of priors about which parts of the input space will yield the best performance, rather than BO's standard priors over functions, which are much less intuitive for users. BOPrO then combines these priors with BO's standard probabilistic model to form a pseudo-posterior used to select which points to evaluate next. We show that BOPrO is around 6.67x faster than state-of-the-art methods and 10,000x faster than random search on a common suite of benchmarks, and achieves state-of-the-art performance on a real-world hardware design application. We also show that BOPrO converges faster even if the priors for the optimum are not entirely accurate and that it robustly recovers from misleading priors.
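To make the pseudo-posterior idea concrete, the following is a minimal Python sketch of one selection step. It assumes a 1-D search space, a Gaussian user prior over the location of the optimum, and a toy stand-in for the surrogate model's probability of improvement; the decaying exponent beta/t mirrors the paper's idea of letting the data gradually override the prior, but all names and values here (mu, sigma, beta, incumbent) are illustrative, not the paper's implementation.

    import numpy as np
    from scipy.stats import norm

    # Hypothetical 1-D search space and a user prior over where the optimum lies.
    x = np.linspace(0.0, 1.0, 1001)
    user_prior = norm.pdf(x, loc=0.7, scale=0.1)     # expert believes the optimum is near 0.7

    # Stand-in surrogate: probability that each x improves on the incumbent
    # (in BOPrO this comes from the probabilistic model; here it is a toy function).
    mu = lambda x: np.sin(6 * x)                     # assumed surrogate mean
    sigma = lambda x: 0.3 + 0.2 * np.abs(x - 0.5)    # assumed surrogate std
    incumbent = -0.5                                 # best objective value seen so far
    model_prob = norm.cdf((incumbent - mu(x)) / sigma(x))  # minimization

    # Pseudo-posterior: prior times model, with the prior's influence decaying
    # over iterations t so the observed data eventually dominates.
    t, beta = 5, 10.0                                # illustrative values
    pseudo_posterior = user_prior ** (beta / t) * model_prob

    x_next = x[np.argmax(pseudo_posterior)]          # next point to evaluate
    print(f"next evaluation at x = {x_next:.3f}")

Early on (small t), the exponent beta/t is large and the expert's prior steers the search toward its preferred region; as t grows, the exponent shrinks and the model's evidence takes over, which is what allows recovery from a misleading prior.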