Bayesian optimization has become a standard technique for hyperparameter optimization of machine learning algorithms. We consider the setting where previous optimization runs are available, and we wish to transfer their outcomes to a new optimization run and thereby accelerate the search. We develop a new hyperparameter-free ensemble model for Bayesian optimization, based on a linear combination of Gaussian Processes and Agnostic Bayesian Learning of Ensembles. We show that this is a generalization of two existing transfer learning extensions to Bayesian optimization and establish a worst-case bound compared to vanilla Bayesian optimization. Using a large collection of hyperparameter optimization benchmark problems, we demonstrate that our contributions substantially reduce optimization time compared to standard Gaussian process-based Bayesian optimization and improve over the current state-of-the-art for warm-starting Bayesian optimization.
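The following is a minimal sketch, not the authors' implementation, of the core idea named in the abstract: a transfer surrogate built as a weighted linear combination of per-task Gaussian Processes. The weighting rule used here (inverse pairwise ranking loss on the target observations) is a hypothetical stand-in for the Agnostic Bayesian Learning of Ensembles procedure; the function names and toy objective are illustrative assumptions.

```python
# Sketch of a weighted GP-ensemble transfer surrogate (illustrative, not the paper's code).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def fit_base_models(previous_runs, target_X, target_y):
    """Fit one GP per previous optimization run, plus one GP on the target task."""
    models = [GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
              for X, y in previous_runs]
    models.append(GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
                  .fit(target_X, target_y))
    return models


def ranking_loss(model, X, y):
    """Count pairwise rankings of the target observations that the model predicts wrongly."""
    mu = model.predict(X)
    return sum((mu[i] < mu[j]) != (y[i] < y[j])
               for i in range(len(y)) for j in range(len(y)) if i != j)


def ensemble_predict(models, weights, X):
    """Combine base GP posteriors as a weighted linear combination."""
    mus, sigmas = zip(*(m.predict(X, return_std=True) for m in models))
    mean = sum(w * mu for w, mu in zip(weights, mus))
    var = sum(w ** 2 * s ** 2 for w, s in zip(weights, sigmas))
    return mean, np.sqrt(var)


# Usage on a 1-d toy objective: two previous runs plus a few target observations.
rng = np.random.default_rng(0)
prev = []
for _ in range(2):
    X = rng.uniform(0, 1, (20, 1))
    prev.append((X, np.sin(5 * X[:, 0]) + 0.1 * rng.normal(size=20)))
tX = rng.uniform(0, 1, (5, 1))
ty = np.sin(5 * tX[:, 0])

models = fit_base_models(prev, tX, ty)
losses = np.array([ranking_loss(m, tX, ty) for m in models], dtype=float)
weights = 1.0 / (1.0 + losses)          # placeholder weighting, not the paper's scheme
weights /= weights.sum()
mean, std = ensemble_predict(models, weights, rng.uniform(0, 1, (3, 1)))
```

Weighting per-task posteriors, rather than pooling all observations into one GP, keeps inference cheap in each run's size and allows uninformative previous runs to be down-weighted on the target task.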