Learning decompositions of expensive-to-evaluate black-box functions promises to scale Bayesian optimisation (BO) to high-dimensional problems. However, the success of these techniques depends on finding proper decompositions that accurately represent the black-box. While previous works learn those decompositions from data, we investigate data-independent decomposition sampling rules in this paper. We find that data-driven learners of decompositions can easily be misled towards local decompositions that do not hold globally across the search space. We then formally show that a random tree-based decomposition sampler exhibits favourable theoretical guarantees that effectively trade off maximal information gain and functional mismatch between the actual black-box and the surrogate induced by the decomposition. These results motivate the random decomposition upper-confidence bound algorithm (RDUCB), which is straightforward to implement, (almost) plug-and-play, and, surprisingly, yields significant empirical gains over the previous state-of-the-art on a comprehensive set of benchmarks. We also confirm the plug-and-play nature of our modelling component by integrating our method with HEBO, showing improved practical gains on the highest-dimensional tasks from Bayesmark.
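The core modelling idea above can be illustrated with a minimal sketch: sample a random tree over the input dimensions and treat each edge as a pairwise component of an additive decomposition, i.e. the surrogate models $f(x) \approx \sum_{(i,j)} f_{ij}(x_i, x_j)$. The random-attachment construction and the function name below are illustrative assumptions, not the paper's exact sampler.

```python
import random

def sample_random_tree_decomposition(d, rng=None):
    """Sample a random tree over variable indices 0..d-1.

    Each returned edge (i, j) corresponds to one pairwise component
    of an additive decomposition of the black-box. This is a hedged
    illustration (random attachment), not RDUCB's exact sampling rule.
    """
    rng = rng or random.Random()
    edges = []
    for i in range(1, d):
        j = rng.randrange(i)  # attach node i to a uniformly chosen earlier node
        edges.append((j, i))  # a tree on d nodes has exactly d - 1 edges
    return edges

# Example: one sampled decomposition for an 8-dimensional problem.
edges = sample_random_tree_decomposition(8, random.Random(0))
```

Because the decomposition is data-independent, a fresh tree can be resampled at every BO iteration, which is what makes the component (almost) plug-and-play in an existing pipeline such as HEBO.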