In this work we study the problem of {\em Stochastic Budgeted Multi-round Submodular Maximization} (SBMSm), in which the goal is to adaptively maximize, over multiple rounds, the sum of the values of a monotone submodular objective function defined on subsets of items, where the value of this function depends on the realization of stochastic events, and the total number of items that can be selected across all rounds is limited by a given budget. This problem extends, and generalizes to multi-round settings, well-studied problems such as (adaptive) influence maximization and stochastic probing. We first show that, when the number of items and stochastic events is suitably bounded, there is a polynomial-time dynamic programming algorithm for SBMSm. Then, we provide a simple greedy approximation algorithm for SBMSm, which first non-adaptively allocates the budget to be spent at each round, and then greedily and adaptively maximizes the objective function using the budget assigned to each round. This algorithm guarantees a $(1-1/e-\epsilon)$-approximation of the optimal adaptive value. Finally, by introducing a metric called the {\em budget-adaptivity gap}, we measure how much better an optimal policy for SBMSm that is adaptive in both the budget allocation and the item selection can be compared with an optimal partially adaptive policy that, like our greedy algorithm, fixes the budget allocation in advance. We show a tight bound of $e/(e-1)$ on the budget-adaptivity gap, which implies that our greedy algorithm guarantees the best approximation achievable by any partially adaptive policy.
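To make the two-phase structure of the greedy algorithm concrete, here is a minimal Python sketch under stated assumptions: it uses a toy coverage objective as a stand-in for the monotone submodular function, and an even split of the budget across rounds as a placeholder allocation rule. All names (`allocate_budgets`, `greedy_round`, `coverage_value`, `sample_realization`) are hypothetical illustrations, not the paper's actual subroutines, and the abstract does not specify how the non-adaptive allocation is computed.

```python
# Sketch of a partially adaptive policy: budgets are fixed per round up
# front (non-adaptively), then items are chosen greedily within each round
# after that round's stochastic event is observed. The even budget split
# and the coverage objective below are illustrative assumptions only.
import random

def coverage_value(chosen_items, realization):
    """Toy monotone submodular objective: size of the union of the
    element sets that the chosen items cover under this realization."""
    covered = set()
    for item in chosen_items:
        covered |= realization.get(item, set())
    return len(covered)

def allocate_budgets(total_budget, num_rounds):
    """Non-adaptive phase: split the budget across rounds in advance.
    An even split is used here as a placeholder allocation rule."""
    base, extra = divmod(total_budget, num_rounds)
    return [base + (1 if r < extra else 0) for r in range(num_rounds)]

def greedy_round(items, budget, value_fn):
    """Adaptive phase for one round: repeatedly add the item with the
    largest marginal gain under the observed realization."""
    chosen, remaining = [], set(items)
    for _ in range(budget):
        current = value_fn(chosen)
        best, best_gain = None, 0.0
        for item in remaining:
            gain = value_fn(chosen + [item]) - current
            if gain > best_gain:
                best, best_gain = item, gain
        if best is None:  # no item has positive marginal gain
            break
        chosen.append(best)
        remaining.remove(best)
    return chosen

def partially_adaptive_greedy(items, num_rounds, total_budget, sample_realization):
    """Sum, over rounds, of the objective value of the greedily chosen sets."""
    budgets = allocate_budgets(total_budget, num_rounds)
    total_value = 0.0
    for r in range(num_rounds):
        realization = sample_realization(r)  # stochastic event for round r
        picked = greedy_round(items, budgets[r],
                              lambda s: coverage_value(s, realization))
        total_value += coverage_value(picked, realization)
    return total_value

if __name__ == "__main__":
    items = list(range(10))
    def sample_realization(round_idx):
        rng = random.Random(round_idx)  # toy per-round randomness
        return {i: {rng.randrange(30) for _ in range(4)} for i in items}
    total = partially_adaptive_greedy(items, num_rounds=3, total_budget=6,
                                      sample_realization=sample_realization)
    print(f"total value over 3 rounds: {total}")
```

Within each round, the inner loop is the standard greedy rule for monotone submodular maximization; it is only the per-round budgets that are committed to before any stochastic event is observed, which is exactly the restriction the budget-adaptivity gap quantifies.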