Bayesian optimization (BO) is an effective approach for optimizing expensive black-box functions; it seeks to trade off exploitation (selecting parameters where the maximum is likely) against exploration (selecting parameters where we are uncertain about the objective function). In many real-world situations, direct measurements of the objective function are not possible, and only binary observations, such as success/failure outcomes or pairwise comparisons, are available. To perform efficient exploration in this setting, we show that it is important for BO algorithms to distinguish between two types of uncertainty: epistemic uncertainty, about the unknown objective function, and aleatoric uncertainty, which arises from noisy observations and cannot be reduced. In effect, only the former is important for efficient exploration. Based on this insight, we propose several new acquisition functions that outperform state-of-the-art heuristics in binary and preferential BO, while being fast to compute and easy to implement. We then generalize these acquisition rules to batch learning, where multiple queries are performed simultaneously.
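The epistemic/aleatoric split for a binary observation can be illustrated with a small Monte Carlo sketch (this is an illustration under assumed modeling choices, not the paper's exact acquisition rule): suppose the latent utility f at a candidate point has a Gaussian posterior N(mu, sigma^2) and binary outcomes follow a logistic observation model. The total predictive entropy of the outcome then decomposes into an irreducible aleatoric part, E_f[H(p(y|f))], and an epistemic part equal to the mutual information I(y; f), which is what exploration should target.

```python
import numpy as np

def binary_entropy(p):
    """Entropy of a Bernoulli(p) outcome, in nats, safe at p = 0 or 1."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

def split_uncertainty(mu, sigma, n=200_000, seed=0):
    """Decompose predictive uncertainty of y ~ Bernoulli(sigmoid(f)),
    with latent posterior f ~ N(mu, sigma^2), into epistemic and
    aleatoric parts via Monte Carlo (logistic link assumed here)."""
    rng = np.random.default_rng(seed)
    f = rng.normal(mu, sigma, n)
    p = 1.0 / (1.0 + np.exp(-f))          # per-sample success probability
    total = binary_entropy(p.mean())      # H[E_f p(y|f)]: total predictive entropy
    aleatoric = float(binary_entropy(p).mean())  # E_f H[p(y|f)]: irreducible noise
    epistemic = float(total) - aleatoric  # mutual information I(y; f) >= 0
    return epistemic, aleatoric
```

With a confident latent posterior (small sigma) the epistemic term vanishes even though the binary outcome itself stays noisy, whereas a broad posterior yields a large epistemic term; an acquisition rule built on this split queries only where the second situation holds.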