Information-based Bayesian optimization (BO) algorithms have achieved state-of-the-art performance in optimizing a black-box objective function. However, they usually require several approximations or simplifying assumptions (without a clear understanding of their effects on BO performance) and/or their generalization to batch BO is computationally unwieldy, especially as the batch size grows. To alleviate these issues, this paper presents a novel trusted-maximizers entropy search (TES) acquisition function: it measures how much an input query contributes to the information gain on the maximizer over a finite set of trusted maximizers, i.e., inputs optimizing functions sampled from the Gaussian process posterior belief of the objective function. Evaluating TES requires either a stochastic approximation via sampling or a deterministic approximation via expectation propagation, both of which are investigated and empirically evaluated using synthetic benchmark objective functions and real-world optimization problems, e.g., hyperparameter tuning of a convolutional neural network and synthesizing 'physically realizable' faces to fool a black-box face recognition system. Though TES naturally generalizes to a batch variant under either approximation, the latter scales to much larger batch sizes in our experiments.
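The trusted maximizers are obtained by drawing functions from the GP posterior and recording where each sampled function attains its maximum (as in Thompson sampling). The sketch below illustrates this on a discrete candidate set with an RBF kernel; the function names, the kernel choice, and the jitter values are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    # Squared-exponential kernel between row-vector inputs (assumed kernel choice).
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def sample_trusted_maximizers(X_obs, y_obs, X_cand, n_samples=10,
                              noise=1e-4, seed=0):
    """Draw n_samples functions from the GP posterior over X_cand and
    return the (unique) candidate inputs that maximize each draw."""
    rng = np.random.default_rng(seed)
    # GP posterior mean and covariance over the candidate set.
    K = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf_kernel(X_obs, X_cand)
    Kss = rbf_kernel(X_cand, X_cand)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v
    # Jitter keeps the posterior covariance numerically positive definite.
    Lc = np.linalg.cholesky(cov + 1e-6 * np.eye(len(X_cand)))
    # Each column is one joint posterior sample of the objective on X_cand.
    samples = mu[:, None] + Lc @ rng.standard_normal((len(X_cand), n_samples))
    # Trusted maximizers: argmax of each sampled function.
    idx = np.unique(np.argmax(samples, axis=0))
    return X_cand[idx]
```

In practice the candidate set could be a dense grid, random inputs, or points found by a continuous optimizer run on each posterior sample; the discrete grid here just keeps the sketch self-contained.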