Real-world problems often involve the optimization of several objectives under multiple constraints. An example is hyper-parameter tuning in machine learning: one may want to simultaneously minimize the estimate of the generalization error of a deep neural network and its prediction time, under the constraint that the network can be implemented on a chip whose area is below some size. Here, both the objectives and the constraint are black boxes, i.e., functions whose analytical expressions are unknown and which are expensive to evaluate. Bayesian optimization (BO) methods have delivered state-of-the-art results for the optimization of such black-box functions. Nevertheless, most BO methods are sequential: at each iteration they evaluate the objectives and the constraints at a single input location. Sometimes, however, resources are available to evaluate several configurations in parallel, and when the expensive evaluations can be carried out simultaneously (e.g., on a cluster of computers), sequential evaluation wastes resources. Yet no parallel BO method has been proposed for the optimization of multiple objectives under several constraints. This article introduces PPESMOC, Parallel Predictive Entropy Search for Multi-objective Bayesian Optimization with Constraints, an information-based batch method for the simultaneous optimization of multiple expensive-to-evaluate black-box functions under several constraints. At each iteration, PPESMOC selects a batch of input locations at which to evaluate the black boxes so as to maximally reduce the entropy of the Pareto set of the optimization problem. We present empirical evidence, in the form of synthetic, benchmark, and real-world experiments, that illustrates the effectiveness of PPESMOC.
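The batch-selection loop described above can be illustrated with a generic sketch. The code below is not the PPESMOC acquisition function; it shows the overall structure of batch Bayesian optimization using a simple Gaussian-process surrogate, an upper-confidence-bound rule, and the constant-liar heuristic for picking a batch of points to evaluate in parallel. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of a batch Bayesian optimization loop.
# NOT the PPESMOC acquisition: a plain GP surrogate with a UCB rule
# and the "constant liar" heuristic, only to show how a batch of
# input locations can be selected for parallel evaluation.

def rbf_kernel(A, B, length=0.2):
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / length**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """Zero-mean GP posterior mean and variance at test points Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v**2, axis=0)   # RBF has unit prior variance
    return mu, np.maximum(var, 1e-12)

def select_batch(X, y, candidates, batch_size=3, beta=2.0):
    """Greedy constant-liar batch selection: after choosing a point,
    pretend its outcome equals the current best value and re-fit."""
    Xb, yb = X.copy(), y.copy()
    batch = []
    for _ in range(batch_size):
        mu, var = gp_posterior(Xb, yb, candidates)
        score = mu - beta * np.sqrt(var)   # minimize: lower bound
        x_next = candidates[np.argmin(score)]
        batch.append(x_next)
        Xb = np.append(Xb, x_next)
        yb = np.append(yb, yb.min())       # the "lie"
    return np.array(batch)

def objective(x):
    """Toy stand-in for an expensive black-box objective."""
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
X = rng.uniform(0, 2, size=4)              # initial design
y = objective(X)
candidates = np.linspace(0, 2, 200)
for _ in range(5):                          # BO iterations
    batch = select_batch(X, y, candidates)
    y_new = objective(batch)                # evaluated "in parallel"
    X, y = np.append(X, batch), np.append(y, y_new)

best_x = X[np.argmin(y)]
```

Each outer iteration dispatches the whole batch at once, which is where the parallel hardware would be used; a multi-objective, constrained method such as PPESMOC replaces the UCB score with an information-theoretic acquisition over the Pareto set.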