We consider the maximization of stochastic monotone continuous submodular functions (CSF) with a diminishing-returns property. Existing algorithms guarantee performance only \textit{in expectation} and do not bound the probability of returning a bad solution, so on any particular run an algorithm may return a solution far worse than its expected-value guarantee suggests. In this paper, we first empirically verify that this is indeed the case. We then provide the first \textit{high-probability} analysis of the existing methods for stochastic CSF maximization, namely PGA, boosted PGA, SCG, and SCG++. Finally, under slightly stronger assumptions, we provide an improved high-probability bound for SCG with a better convergence rate than that of the in-expectation guarantee. Through extensive experiments on non-concave quadratic programming (NQP) and optimal budget allocation, we confirm the validity of our bounds and show that, even in the worst case, PGA converges to $OPT/2$, and boosted PGA, SCG, and SCG++ converge to $(1 - 1/e)OPT$, though at a slower rate than in expectation.
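To make the setting concrete, the following is a minimal Python sketch (not the authors' implementation) of stochastic PGA on a monotone DR-submodular NQP instance. The instance construction (an entrywise non-positive symmetric $H$ with $h = -H\mathbf{1}$, following the standard NQP recipe), the box constraint $[0,1]^n$, the Gaussian gradient noise, the step size $1/\sqrt{t}$, and all numeric parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, sigma = 10, 500, 0.5    # dimension, iterations, noise level (assumed)

A = rng.uniform(0.0, 1.0, size=(n, n))
H = -(A + A.T) / 2.0          # symmetric, entrywise non-positive Hessian
h = -H @ np.ones(n)           # keeps grad f >= 0 on [0,1]^n, i.e. f monotone

def f(x):
    # NQP objective f(x) = 0.5 x^T H x + h^T x; an entrywise non-positive
    # Hessian makes f continuous DR-submodular.
    return 0.5 * x @ H @ x + h @ x

def stochastic_grad(x):
    # Unbiased stochastic first-order oracle: exact gradient plus noise.
    return H @ x + h + sigma * rng.standard_normal(n)

x = np.zeros(n)
for t in range(1, T + 1):
    eta = 1.0 / np.sqrt(t)                                # diminishing step
    x = np.clip(x + eta * stochastic_grad(x), 0.0, 1.0)   # project onto box

print(f"single-run value f(x_T) = {f(x):.4f}")
```

Repeating such a run many times and inspecting the spread of single-run values, rather than only their mean, is exactly the per-run behavior that the high-probability bounds studied in this paper are meant to control.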