In this work, we propose a computationally efficient algorithm for the global optimization of univariate loss functions. For performance evaluation, we study the cumulative regret of the algorithm instead of the simple regret between our best query and the optimal value of the objective function. Although our approach achieves regret guarantees similar to those of traditional lower-bounding algorithms such as the Piyavskii-Shubert method for Lipschitz continuous or Lipschitz smooth functions, it has a major computational cost advantage. In the Piyavskii-Shubert method, the query points may be hard to determine for certain types of functions, since they are themselves solutions to additional optimization problems. This issue is circumvented in our binary sampling approach, where the sampling set is predetermined irrespective of the function characteristics. For the search space $[0,1]$, our approach incurs at most $L\log (3T)$ regret for $L$-Lipschitz continuous functions and $2.25H$ regret for $H$-Lipschitz smooth functions. We also analytically extend our results to a broader class of functions covering more complex regularity conditions.
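To make the contrast with lower-bounding methods concrete, the following is a minimal illustrative sketch (not the paper's exact algorithm) of the predetermined binary sampling idea: the queries form a dyadic grid of $[0,1]$ that is fixed in advance, so no auxiliary optimization problem is solved to pick the next point. The function names and the example objective are assumptions for illustration.

```python
# Sketch of predetermined dyadic (binary) sampling on [0, 1].
# The query schedule is fixed in advance, independent of the objective;
# this is the computational advantage over Piyavskii-Shubert, whose
# queries solve auxiliary optimization problems.

def dyadic_points(depth):
    """Midpoints of dyadic intervals, in order: 1/2, then 1/4, 3/4, then 1/8, ..."""
    pts = []
    for d in range(1, depth + 1):
        pts.extend((2 * k - 1) / 2 ** d for k in range(1, 2 ** (d - 1) + 1))
    return pts

def cumulative_regret(f, f_min, depth):
    """Sum of f(x_t) - f* over the predetermined query sequence."""
    return sum(f(x) - f_min for x in dyadic_points(depth))

# Hypothetical example: 1-Lipschitz objective |x - 0.3| with minimum 0 at x = 0.3.
queries = dyadic_points(3)  # [0.5, 0.25, 0.75, 0.125, 0.375, 0.625, 0.875]
regret = cumulative_regret(lambda x: abs(x - 0.3), 0.0, 3)
```

Note that the query sequence `dyadic_points` never inspects the objective; only the regret accounting does.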