In this paper, we study stochastic submodular maximization problems with general matroid constraints, which arise naturally in online learning, team formation, facility location, influence maximization, active learning, and sensing. Specifically, we focus on maximizing submodular functions defined as expectations over a class of submodular functions with an unknown distribution. We show that, for monotone functions of this form, the stochastic continuous greedy algorithm attains an approximation ratio (in expectation) arbitrarily close to $(1-1/e) \approx 63\%$ using a polynomial estimation of the gradient. We argue that replacing the sampling-based gradient estimators of prior art with this polynomial estimator eliminates a source of randomness and empirically reduces execution time.
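To make the idea concrete, the following is a minimal sketch (not the paper's estimator) of continuous greedy for a special case where the gradient is an exact polynomial: a weighted coverage function under a cardinality (uniform matroid) constraint. For coverage, the multilinear extension $F(y) = \sum_j w_j \bigl(1 - \prod_{i \in S_j} (1 - y_i)\bigr)$ is an explicit polynomial in $y$, so its gradient can be evaluated in closed form rather than estimated by sampling. All function and variable names here are illustrative.

```python
import numpy as np

def coverage_grad(y, cover_sets, weights):
    """Exact gradient of the multilinear extension of a weighted coverage
    function: F(y) = sum_j w_j * (1 - prod_{i in S_j} (1 - y_i)).
    cover_sets[j] lists the ground elements that cover item j."""
    g = np.zeros(len(y))
    for S, w in zip(cover_sets, weights):
        prod_all = np.prod([1.0 - y[i] for i in S])
        for i in S:
            # dF/dy_i = w * prod_{i' in S, i' != i} (1 - y_{i'})
            denom = 1.0 - y[i]
            if denom > 1e-12:
                g[i] += w * prod_all / denom
            else:  # y_i ~ 1: recompute the product excluding i
                g[i] += w * np.prod([1.0 - y[j] for j in S if j != i])
    return g

def continuous_greedy(n, k, grad_fn, T=100):
    """Continuous greedy over the uniform (cardinality-k) matroid polytope:
    each of T steps moves 1/T toward the feasible vertex maximizing the
    linearization of F, i.e. the indicator of the top-k gradient entries."""
    y = np.zeros(n)
    for _ in range(T):
        g = grad_fn(y)
        v = np.zeros(n)
        v[np.argsort(-g)[:k]] = 1.0  # best vertex of the matroid polytope
        y += v / T
    return np.clip(y, 0.0, 1.0)
```

For general matroids, the inner linear maximization is solved greedily over the matroid's bases, and the fractional point $y$ is then rounded (e.g. by swap or pipage rounding) to an integral solution, losing nothing further in the $(1-1/e)$ guarantee.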