A solution that is reliable only under favourable conditions is hardly a safe solution. Min Max Optimization is an approach that returns optima that are robust against worst-case conditions. We propose algorithms that perform Min Max Optimization in a setting where the function to be optimized is not known a priori and hence has to be learned by experiments. To this end, we extend the Bayesian Optimization setting, which is tailored to maximization problems, to Min Max Optimization problems. While related work extends the two acquisition functions Expected Improvement and Gaussian Process Upper Confidence Bound, we extend the two acquisition functions Entropy Search and Knowledge Gradient. These acquisition functions gain knowledge about the optimum itself instead of merely sampling points that are presumed to be optimal. In our evaluation we show that these acquisition functions lead to better solutions, converging faster to the optimum than the benchmark settings.
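To make the min-max objective concrete, the following is a minimal sketch on a hypothetical toy function (not from the paper): the robust optimum is the design x whose worst-case value over environment conditions theta is smallest. Here f is fully known and evaluated on a grid; in the paper's setting f is unknown and must be learned from experiments via Bayesian Optimization.

```python
import numpy as np

def f(x, theta):
    # hypothetical test function: cost of design x under condition theta
    return (x - 1.0) ** 2 + x * np.sin(3.0 * theta)

xs = np.linspace(-2.0, 2.0, 401)       # candidate designs x
thetas = np.linspace(0.0, np.pi, 201)  # adversarial conditions theta

# Evaluate f on the full grid of (x, theta) pairs.
X, T = np.meshgrid(xs, thetas, indexing="ij")
values = f(X, T)

# Min-max: for each x take the worst case over theta, then minimize over x.
worst_case = values.max(axis=1)
x_star = xs[worst_case.argmin()]

print(f"robust optimum x* = {x_star:.3f}, worst-case value = {worst_case.min():.3f}")
```

Note that a plain maximizer of the average-case value would pick a different x than this worst-case criterion; the paper's acquisition functions aim to identify x* with as few (expensive) evaluations of f as possible.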