In the framework of online convex optimization, most iterative algorithms require the computation of projections onto convex sets, which can be computationally expensive. To tackle this problem, HK12 proposed the study of projection-free methods that replace projections with less expensive computations. The most common approach is based on the Frank-Wolfe method, which uses linear optimization in lieu of projections. Recent work by GK22 gave sublinear adaptive regret guarantees with projection-free algorithms based on the Frank-Wolfe approach. In this work we give projection-free algorithms that are based on a different technique, inspired by Mhammedi22, which replaces projections with set-membership computations. We propose a simple lazy gradient-based algorithm with a Minkowski regularization that attains near-optimal adaptive regret bounds. For general convex loss functions we improve previous adaptive regret bounds from $O(T^{3/4})$ to $O(\sqrt{T})$, and further to a tight interval-dependent bound $\tilde{O}(\sqrt{I})$, where $I$ denotes the interval length. For strongly convex functions we obtain the first poly-logarithmic adaptive regret bounds using a projection-free algorithm.
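To make the set-membership idea concrete, the following is a minimal sketch (not the paper's algorithm; the oracle name `is_member`, the tolerance, and the radial-scaling step are illustrative assumptions) of how the Minkowski gauge of a convex set containing the origin can be approximated by binary search over membership queries, and then used in place of a Euclidean projection:

```python
import numpy as np

def gauge(x, is_member, tol=1e-8, hi=1e8):
    """Approximate the Minkowski gauge of x w.r.t. a convex set K
    containing the origin, using only a membership oracle:
        gauge_K(x) = inf { s > 0 : x / s in K }.
    Found by binary search on the scale s."""
    if is_member(x):
        hi = 1.0  # x is already in K, so gauge_K(x) <= 1
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid > 0 and is_member(x / mid):
            hi = mid  # x/mid lies in K: the gauge is at most mid
        else:
            lo = mid  # x/mid lies outside K: the gauge exceeds mid
    return hi

def radial_scale(x, is_member):
    """Map x into K by scaling toward the origin: x / max(1, gauge(x)).
    This replaces a Euclidean projection with O(log(1/tol))
    membership queries."""
    return x / max(1.0, gauge(x, is_member))

# Example: K is the unit Euclidean ball.
in_ball = lambda x: np.linalg.norm(x) <= 1.0
y = radial_scale(np.array([3.0, 4.0]), in_ball)  # approx. [0.6, 0.8]
```

The design point this illustrates is that each "projection" costs only logarithmically many membership tests, which is the source of the computational savings over exact projections onto general convex sets.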