Bayesian optimization (BO), while proven highly effective for many black-box function optimization tasks, requires practitioners to carefully select priors that model their functions of interest well. Rather than specifying priors by hand, researchers have investigated transfer-learning-based methods to learn them automatically, e.g., multi-task BO (Swersky et al., 2013), few-shot BO (Wistuba and Grabocka, 2021), and HyperBO (Wang et al., 2022). However, these prior-learning methods typically assume that the input domain is the same for all tasks, which limits their ability to use observations from functions with different domains or to generalize the learned priors to BO on different search spaces. In this work, we present HyperBO+: a pre-training approach for hierarchical Gaussian processes that enables the same prior to work universally for Bayesian optimization on functions with different domains. We propose a two-step pre-training method and analyze its appealing asymptotic properties and its benefits to BO, both theoretically and empirically. On real-world hyperparameter tuning tasks that involve multiple search spaces, we demonstrate that HyperBO+ is able to generalize to unseen search spaces and achieves lower regret than competitive baselines.
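The following is a minimal sketch, not the authors' implementation, of the two-step pre-training idea summarized above: step one fits per-task GP hyperparameters on each training function (which may live on search spaces of different dimensionality), and step two fits a shared hyper-prior over those fitted hyperparameters that can then serve as a prior for BO on an unseen search space. All function and variable names are illustrative assumptions, and an isotropic kernel is used purely so the same parameterization applies across domains.

```python
# Sketch of two-step pre-training for a hierarchical GP prior (assumed names).
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(x1, x2, log_amp, log_ls):
    """Isotropic squared-exponential kernel with log-amplitude / log-lengthscale."""
    sqdist = np.sum((x1[:, None, :] - x2[None, :, :]) ** 2, axis=-1)
    return np.exp(2 * log_amp) * np.exp(-0.5 * sqdist / np.exp(2 * log_ls))

def neg_log_marginal_likelihood(params, x, y, noise=1e-3):
    """Negative log marginal likelihood of a zero-mean GP with fixed noise."""
    log_amp, log_ls = params
    k = rbf_kernel(x, x, log_amp, log_ls) + noise * np.eye(len(x))
    l = np.linalg.cholesky(k)
    alpha = np.linalg.solve(l.T, np.linalg.solve(l, y))
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(l))) + 0.5 * len(y) * np.log(2 * np.pi)

def fit_gp_hyperparams(x, y):
    """Step 1: per-task maximum-likelihood estimate of GP hyperparameters."""
    res = minimize(neg_log_marginal_likelihood, x0=np.zeros(2), args=(x, y))
    return res.x  # (log_amp, log_ls)

def fit_hyper_prior(per_task_params):
    """Step 2: fit a Gaussian hyper-prior over the per-task GP hyperparameters."""
    stacked = np.stack(per_task_params)                # (num_tasks, 2)
    return stacked.mean(axis=0), stacked.std(axis=0) + 1e-6  # per-dim mean, std

# Toy "training tasks" on search spaces of different dimensionality.
rng = np.random.default_rng(0)
tasks = []
for dim in (2, 3, 5):
    x = rng.uniform(size=(20, dim))
    y = np.sin(3 * x).sum(axis=1) + 0.05 * rng.normal(size=20)
    tasks.append((x, y))

per_task = [fit_gp_hyperparams(x, y) for x, y in tasks]
prior_mean, prior_std = fit_hyper_prior(per_task)
print("hyper-prior over (log_amp, log_ls):", prior_mean, prior_std)
```

In this sketch the learned hyper-prior would be used, for a new search space, as the prior over GP hyperparameters rather than choosing them by hand; the actual HyperBO+ procedure and model family are as described in the paper, not in this toy example.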