While Bayesian Optimization (BO) is a very popular method for optimizing expensive black-box functions, it fails to leverage the experience of domain experts. This causes BO to waste function evaluations on bad design choices (e.g., machine learning hyperparameters) that the expert already knows to work poorly. To address this issue, we introduce Bayesian Optimization with a Prior for the Optimum (BOPrO). BOPrO allows users to inject their knowledge into the optimization process in the form of priors about which parts of the input space will yield the best performance, rather than BO's standard priors over functions, which are much less intuitive for users. BOPrO then combines these priors with BO's standard probabilistic model to form a pseudo-posterior used to select which points to evaluate next. We show that BOPrO is around 6.67x faster to convergence than state-of-the-art methods on a common suite of benchmarks, and achieves new state-of-the-art performance on a real-world hardware design application. We also show that BOPrO converges faster even if the priors for the optimum are not entirely accurate, and that it robustly recovers from misleading priors.
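To make the pseudo-posterior idea concrete, below is a minimal Python sketch of one way a user prior over good input regions can be blended with a probabilistic model's belief. It assumes a multiplicative combination in log space whose model weight grows with the iteration count; the function name pseudo_posterior, the decay parameter beta, and the kernel density estimate standing in for BO's probabilistic model are illustrative assumptions, not the paper's exact formulation.

import numpy as np
from scipy.stats import norm, gaussian_kde

def pseudo_posterior(log_prior, log_model, t, beta=10.0):
    # Assumed combination: early in the run (small t) the user prior
    # dominates; as t grows, the model's evidence is weighted more
    # heavily and eventually takes over.
    return log_prior + (t / beta) * log_model

rng = np.random.default_rng(0)

# User prior for the optimum: the expert believes good values of the
# single input x lie near 0.2.
prior = norm(loc=0.2, scale=0.1)

# Stand-in for BO's probabilistic model: a KDE over the best points
# observed so far (hypothetical; any density over "good" inputs works).
good_observations = rng.normal(0.35, 0.05, size=20)
model = gaussian_kde(good_observations)

# Score a grid of candidates and evaluate the argmax next.
candidates = np.linspace(0.0, 1.0, 501)
t = 15       # current optimization iteration
eps = 1e-12  # guard against log(0)
scores = pseudo_posterior(np.log(prior.pdf(candidates) + eps),
                          np.log(model(candidates) + eps),
                          t)
print("next point to evaluate:", candidates[np.argmax(scores)])

In this toy setup, small t selects points near the prior mode (0.2), while larger t drifts the selection toward the model's density around 0.35, mirroring the claim that the method recovers from misleading priors as evaluations accumulate.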