Bayesian optimization is the state-of-the-art technique for the optimization of black boxes, i.e., functions for which we have access neither to an analytical expression nor to gradients, that are expensive to evaluate, and whose evaluations are noisy. The most popular application of Bayesian optimization is the automatic hyperparameter tuning of machine learning algorithms, where we obtain the best configuration of a machine learning algorithm by optimizing an estimate of its generalization error. Despite being applied with success, Bayesian optimization methodologies also have hyperparameters that need to be configured, such as the probabilistic surrogate model or the acquisition function. A poor choice of these hyperparameters yields poor-quality results. Typically, these hyperparameters are tuned by making assumptions about the objective function that we want to optimize, but there are scenarios where we do not have any prior information about the objective function. In this paper, we propose a first attempt at automatic Bayesian optimization by exploring several heuristics that automatically tune the acquisition function of Bayesian optimization. We illustrate the effectiveness of these heuristics on a set of benchmark problems and on a hyperparameter tuning problem for a machine learning algorithm.
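To make the setting concrete, the following is a minimal sketch of the standard Bayesian optimization loop the abstract refers to: a Gaussian process surrogate fitted to noisy black-box evaluations, with an expected improvement acquisition function deciding where to evaluate next. It does not implement the paper's self-tuning heuristics; the names `objective` and `expected_improvement` and the toy 1-D problem are illustrative assumptions, not from the paper.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Toy noisy black box (stands in for, e.g., an estimate of the
# generalization error of a model as a function of one hyperparameter).
def objective(x):
    return np.sin(3.0 * x) + 0.1 * np.random.randn(*x.shape)

rng = np.random.default_rng(0)
low, high = 0.0, 2.0

# Initial design: a few random evaluations of the black box.
X = rng.uniform(low, high, size=(3, 1))
y = objective(X).ravel()

# Probabilistic surrogate model: a GP with a Matern kernel and a
# noise level (alpha) to account for noisy evaluations.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-2,
                              normalize_y=True)

def expected_improvement(X_cand, gp, y_best):
    # EI under the minimization convention.
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

for _ in range(15):
    gp.fit(X, y)
    # A dense grid stands in for the inner acquisition optimizer.
    X_cand = np.linspace(low, high, 500).reshape(-1, 1)
    ei = expected_improvement(X_cand, gp, y.min())
    x_next = X_cand[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.concatenate([y, objective(x_next).ravel()])

print("Best observed value:", y.min(), "at x =", X[np.argmin(y)].item())
```

The choices fixed up front here, the Matern kernel, the noise level, and expected improvement itself, are exactly the kind of Bayesian optimization hyperparameters whose automatic configuration the paper investigates.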