When hyperparameter optimization of a machine learning algorithm is repeated for multiple datasets, knowledge can be transferred to an optimization run on a new dataset. We develop a new hyperparameter-free ensemble model for Bayesian optimization that generalizes two existing transfer-learning extensions of Bayesian optimization, and we establish a worst-case bound relative to vanilla Bayesian optimization. Using a large collection of hyperparameter optimization benchmark problems, we demonstrate that our contributions substantially reduce optimization time compared to standard Gaussian process-based Bayesian optimization and improve over the current state of the art in transfer hyperparameter optimization.
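To make the high-level description concrete, below is a minimal, illustrative sketch of a transfer surrogate for Bayesian optimization: one Gaussian process per previous dataset plus one for the target dataset, combined into a single predictive distribution whose weights are derived from a simple pairwise ranking loss on the target observations. This is a simplified stand-in, not the paper's exact ensemble model; the use of scikit-learn's GaussianProcessRegressor, the Matern kernel, the inverse-ranking-loss weighting, and the toy data are all assumptions made for illustration.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def fit_gp(X, y):
        """Fit one GP surrogate on (hyperparameter, loss) observations."""
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X, y)
        return gp

    def ranking_loss(gp, X, y):
        """Count misranked pairs of target observations under the GP's posterior mean."""
        mu = gp.predict(X)
        loss = 0
        for i in range(len(y)):
            for j in range(len(y)):
                loss += int((mu[i] < mu[j]) != (y[i] < y[j]))
        return loss

    def ensemble_predict(gps, weights, X):
        """Combine per-dataset posteriors into one mean and standard deviation per point."""
        mus, sigmas = zip(*(gp.predict(X, return_std=True) for gp in gps))
        mu = sum(w * m for w, m in zip(weights, mus))
        var = sum(w ** 2 * s ** 2 for w, s in zip(weights, sigmas))
        return mu, np.sqrt(var)

    # Toy usage: two previous 1-d tasks and a handful of target observations.
    rng = np.random.default_rng(0)
    prev_X = [rng.uniform(0, 1, (20, 1)) for _ in range(2)]
    prev_tasks = [(X, np.sin(5 * X[:, 0]) + 0.1 * rng.normal(size=20)) for X in prev_X]
    X_tgt = rng.uniform(0, 1, (5, 1))
    y_tgt = np.sin(5 * X_tgt[:, 0] + 0.3) + 0.1 * rng.normal(size=5)

    # Fit one surrogate per previous dataset plus one on the target data,
    # then weight each surrogate by how well it ranks the target observations.
    gps = [fit_gp(X, y) for X, y in prev_tasks] + [fit_gp(X_tgt, y_tgt)]
    losses = np.array([ranking_loss(gp, X_tgt, y_tgt) for gp in gps], dtype=float)
    weights = 1.0 / (losses + 1.0)
    weights /= weights.sum()

    # The ensemble's mean/variance would feed an acquisition function (e.g. EI).
    mu, sigma = ensemble_predict(gps, weights, rng.uniform(0, 1, (3, 1)))
    print(mu, sigma)

The ensemble's predictive mean and variance can then be plugged into any standard acquisition function, so the transfer mechanism stays independent of the rest of the Bayesian optimization loop; the weighting scheme shown here is hyperparameter-free in the sense that no additional knobs must be tuned per dataset.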