Vector norms play a fundamental role in computer science and optimization, so there is an ongoing effort to generalize existing algorithms to settings beyond $\ell_\infty$ and $\ell_p$ norms. We show that many online and bandit applications for general norms admit good algorithms as long as the norm can be approximated by a function that is ``gradient-stable'', a notion that we introduce. Roughly speaking, it says that the gradient of the function should not drastically decrease (multiplicatively) in any component as the input vector increases. We prove that several families of norms, including all monotone symmetric norms, admit a gradient-stable approximation, giving the first online and bandit algorithms for these norm families. In particular, our notion of gradient-stability yields $O\big(\log^2 (\text{dimension})\big)$-competitive algorithms for the symmetric norm generalizations of Online Generalized Load Balancing and Bandits with Knapsacks. Our techniques extend beyond symmetric norms as well, e.g., to Online Vector Scheduling and to Online Generalized Assignment with Convex Costs. Two key properties implied by gradient-stable approximations, which underlie our applications, are a ``smooth game inequality'' and an approximate converse to Jensen's inequality.
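To make the informal description above concrete, here is one possible formalization (a sketch in our own notation; the paper's precise definition may differ in its quantifiers and parameters): call a monotone differentiable function $f \colon \mathbb{R}^d_{\geq 0} \to \mathbb{R}_{\geq 0}$ $\alpha$-gradient-stable, for some $\alpha \geq 1$, if
$$ \nabla_i f(v + u) \;\geq\; \frac{1}{\alpha}\, \nabla_i f(v) \qquad \text{for all } v, u \in \mathbb{R}^d_{\geq 0} \text{ and all coordinates } i, $$
i.e., increasing the input vector never multiplicatively decreases any component of the gradient by more than a factor of $\alpha$. As an illustration, $f(v) = \sum_i v_i^p$ with $p \geq 1$ has $\nabla_i f(v) = p\, v_i^{p-1}$, which is non-decreasing in $v$, so $f$ is gradient-stable with $\alpha = 1$; by contrast, the norm $\|v\|_p$ itself has $\nabla_i \|v\|_p = (v_i / \|v\|_p)^{p-1}$, which shrinks without bound as the other coordinates grow, which is why one approximates the norm by a gradient-stable surrogate rather than using it directly.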