Bayesian Optimization (BO) is a method for globally optimizing black-box functions. While BO has been successfully applied in many scenarios, developing effective BO algorithms that scale to functions with high-dimensional domains remains a challenge. Optimizing such functions with vanilla BO is extremely time-consuming. Alternative strategies for high-dimensional BO based on embedding the high-dimensional space into a lower-dimensional one are sensitive to the choice of the embedding dimension, which must be pre-specified. We develop a new computationally efficient high-dimensional BO method that exploits variable selection. Our method automatically learns axis-aligned sub-spaces, i.e. spaces containing selected variables, without requiring any pre-specified hyperparameters. We theoretically analyze the computational complexity of our algorithm and derive a regret bound. We empirically show the efficacy of our method on several synthetic and real problems.
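The central idea of running BO over an axis-aligned sub-space, i.e. fitting the surrogate and optimizing the acquisition only over a selected subset of variables while the remaining coordinates are held fixed, can be illustrated with a minimal sketch. This is not the paper's algorithm (which learns the variable subset automatically); the hand-picked active set, the GP-UCB acquisition, the RBF length scale, the observation jitter, and the fixed default value 0.5 for inactive coordinates are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, ls=0.5):
    """Squared-exponential kernel with unit signal variance."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def gp_posterior(X, y, Xq, jitter=1e-3):
    """Exact GP posterior mean and variance at query points Xq."""
    K = rbf_kernel(X, X) + jitter * np.eye(len(X))
    Kq = rbf_kernel(Xq, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Kq @ alpha
    v = np.linalg.solve(L, Kq.T)
    var = np.clip(1.0 - (v ** 2).sum(0), 1e-12, None)
    return mu, var

def bo_on_subspace(f, dim, active, n_init=5, n_iter=20, seed=0):
    """Maximize f over [0, 1]^dim, but model and search only the
    `active` coordinates; inactive coordinates stay at 0.5."""
    rng = np.random.default_rng(seed)

    def lift(xa):
        # Embed a sub-space point back into the full-dimensional domain.
        x = np.full(dim, 0.5)
        x[active] = xa
        return x

    Xa = rng.uniform(0, 1, (n_init, len(active)))
    y = np.array([f(lift(xa)) for xa in Xa])
    for _ in range(n_iter):
        # Maximize a UCB acquisition over random sub-space candidates.
        cand = rng.uniform(0, 1, (256, len(active)))
        mu, var = gp_posterior(Xa, y, cand)
        xa = cand[np.argmax(mu + 2.0 * np.sqrt(var))]
        Xa = np.vstack([Xa, xa])
        y = np.append(y, f(lift(xa)))
    best = np.argmax(y)
    return lift(Xa[best]), y[best]

# A 20-dimensional objective that effectively depends on dims 0 and 3 only.
def f(x):
    return -((x[0] - 0.7) ** 2 + (x[3] - 0.2) ** 2)

x_best, y_best = bo_on_subspace(f, dim=20, active=[0, 3])
```

Because the GP is fit in the 2-dimensional sub-space rather than the full 20-dimensional domain, each iteration's kernel computations and acquisition search stay cheap; this is the computational benefit that variable selection aims to exploit.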