In this paper, we revisit constrained and stochastic continuous submodular maximization in both offline and online settings. For every $\gamma$-weakly DR-submodular function $f$, we use a factor-revealing optimization problem to derive an optimal auxiliary function $F$, whose stationary points provide a $(1-e^{-\gamma})$-approximation to the global maximum value (denoted as $OPT$) of the problem $\max_{\boldsymbol{x}\in\mathcal{C}}f(\boldsymbol{x})$. Naturally, projected (mirror) gradient ascent on this non-oblivious function attains an objective value of at least $(1-e^{-\gamma}-\epsilon^{2})OPT-\epsilon$ after $O(1/\epsilon^{2})$ iterations, beating the traditional $(\frac{\gamma^{2}}{1+\gamma^{2}})$-approximation gradient ascent \citep{hassani2017gradient} for submodular maximization. Similarly, based on $F$, the classical Frank-Wolfe algorithm equipped with a variance reduction technique \citep{mokhtari2018conditional} also returns a solution with objective value larger than $(1-e^{-\gamma}-\epsilon^{2})OPT-\epsilon$ after $O(1/\epsilon^{3})$ iterations. In the online setting, we first consider adversarial delays for stochastic gradient feedback, under which we propose a boosting online gradient algorithm with the same non-oblivious search, achieving a regret of $O(\sqrt{D})$ (where $D$ is the sum of the delays of gradient feedback) against a $(1-e^{-\gamma})$-approximation to the best feasible solution in hindsight. Finally, extensive numerical experiments demonstrate the efficiency of our boosting methods.
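To make the offline boosting idea concrete, the following is a minimal Python sketch of projected gradient ascent on a non-oblivious surrogate. It assumes the integral form $F(\boldsymbol{x})=\int_{0}^{1}\frac{e^{\gamma(z-1)}}{z}f(z\boldsymbol{x})\,dz$ used in the related boosting literature (not necessarily the exact function derived in this paper), together with hypothetical oracles \texttt{grad\_f} (a stochastic gradient oracle for $f$) and \texttt{project} (Euclidean projection onto $\mathcal{C}$); step size and iteration count are illustrative.

\begin{verbatim}
import numpy as np

def boosted_grad_estimator(grad_f, x, gamma, rng):
    # Unbiased estimate of grad F(x) for
    # F(x) = int_0^1 e^{gamma(z-1)}/z * f(z x) dz  (assumed surrogate form):
    # sample z with density gamma*e^{gamma(z-1)}/(1 - e^{-gamma}) on [0,1]
    # via inverse-CDF sampling, then reweight the gradient of f at z*x.
    u = rng.uniform()
    z = 1.0 + np.log(u + (1.0 - u) * np.exp(-gamma)) / gamma
    weight = (1.0 - np.exp(-gamma)) / gamma
    return weight * grad_f(z * x)

def boosted_projected_gradient_ascent(grad_f, project, x0, gamma,
                                      step_size=0.1, iters=1000, seed=0):
    # Projected gradient ascent on the non-oblivious surrogate F,
    # using only stochastic first-order access to the original f.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = boosted_grad_estimator(grad_f, x, gamma, rng)
        x = project(x + step_size * g)
    return x
\end{verbatim}

Under this surrogate, a stationary point of $F$ over $\mathcal{C}$ certifies a $(1-e^{-\gamma})$-approximation, which is why plain projected ascent on $F$ can beat the $\frac{\gamma^{2}}{1+\gamma^{2}}$ guarantee obtained by ascending $f$ directly.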