Gaussian processes (GPs) are cemented as the model of choice in Bayesian optimization and active learning. Yet, they depend heavily on well-chosen hyperparameters to reach their full potential, and the literature devotes little attention to finding them. We demonstrate the impact of selecting good hyperparameters for GPs and present two acquisition functions that explicitly prioritize this goal. Statistical distance-based Active Learning (SAL) considers the average disagreement among samples from the posterior, as measured by a statistical distance. It is shown to outperform the state-of-the-art in Bayesian active learning on a number of test functions. We then introduce Self-Correcting Bayesian Optimization (SCoreBO), which extends SAL to perform Bayesian optimization and active hyperparameter learning simultaneously. SCoreBO learns the model hyperparameters at improved rates compared to vanilla BO, while outperforming the latest Bayesian optimization methods on traditional benchmarks. Moreover, the importance of self-correction is demonstrated on an array of exotic Bayesian optimization tasks.
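To make the SAL criterion concrete, the sketch below scores candidate points by the average pairwise disagreement among predictive distributions induced by different hyperparameter samples. It is a minimal NumPy illustration, not the authors' implementation: it assumes univariate Gaussian predictives and uses the closed-form Wasserstein-2 distance between Gaussians as the statistical distance; the function name `sal_acquisition` and the array layout are hypothetical.

```python
import numpy as np

def sal_acquisition(means, stds):
    """Illustrative statistical-distance acquisition (hypothetical sketch).

    means, stds: arrays of shape (M, N) holding the GP predictive mean and
    standard deviation at N candidate points under M samples from the
    hyperparameter posterior.
    Returns the average pairwise Wasserstein-2 distance per candidate.
    """
    M = means.shape[0]
    total = np.zeros(means.shape[1])
    for i in range(M):
        for j in range(i + 1, M):
            # Closed form for univariate Gaussians:
            # W2(N(m1, s1^2), N(m2, s2^2)) = sqrt((m1 - m2)^2 + (s1 - s2)^2)
            total += np.sqrt((means[i] - means[j]) ** 2
                             + (stds[i] - stds[j]) ** 2)
    return total / (M * (M - 1) / 2)  # average over all pairs

# Usage: query where the hyperparameter samples disagree most, e.g.
# next_x = candidates[np.argmax(sal_acquisition(means, stds))]
```

Points of high disagreement are the most informative about the hyperparameters, which is what lets SCoreBO correct a misspecified model while it optimizes.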