The performance of acquisition functions for Bayesian optimisation to locate the global optimum of continuous functions is investigated in terms of the Pareto front between exploration and exploitation. We show that Expected Improvement (EI) and the Upper Confidence Bound (UCB) always select solutions for expensive evaluation that lie on the Pareto front, whereas Probability of Improvement is not guaranteed to do so and Weighted Expected Improvement does so only for a restricted range of weights. We introduce two novel $\epsilon$-greedy acquisition functions. Extensive empirical evaluation of these, together with random search and purely exploratory and purely exploitative search, on 10 benchmark problems in 1 to 10 dimensions shows that $\epsilon$-greedy algorithms are generally at least as effective as conventional acquisition functions (e.g., EI and UCB), particularly with a limited budget, and that in higher dimensions they outperform the conventional approaches. These results are borne out on a real-world computational fluid dynamics optimisation problem and a robotics active learning problem. Our analysis and experiments suggest that the most effective strategy, particularly in higher dimensions, is to be mostly greedy, occasionally selecting a random solution for exploration.
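To make the strategy in the final sentence concrete, the following is a minimal sketch of an $\epsilon$-greedy selection rule: with probability $\epsilon$ a random exploratory point is chosen, otherwise the point with the best surrogate posterior mean is exploited. The names `posterior_mean`, `candidates`, `bounds`, and the choice of $\epsilon$ are illustrative assumptions; this is not a reproduction of the two acquisition functions introduced in the paper.

```python
import numpy as np


def epsilon_greedy_select(posterior_mean, candidates, bounds, epsilon=0.1, rng=None):
    """Choose the next point to evaluate expensively (minimisation assumed).

    With probability `epsilon`, explore by drawing a uniformly random point
    from the box-constrained search space; otherwise exploit by returning the
    candidate with the lowest surrogate posterior mean.
    """
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() < epsilon:
        # Explore: uniform random point within the bounds (shape: (d, 2)).
        lower, upper = bounds[:, 0], bounds[:, 1]
        return lower + rng.random(lower.shape) * (upper - lower)
    # Exploit: greedy choice under the surrogate model's predicted mean.
    means = posterior_mean(candidates)
    return candidates[np.argmin(means)]
```

In practice `posterior_mean` would be the predictive mean of a Gaussian process surrogate and `candidates` a set of points proposed by an inner optimiser; both are assumed here for illustration only.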