Submodular optimization generalizes many classic problems in combinatorial optimization and has recently found a wide range of applications in machine learning (e.g., feature engineering and active learning). For many large-scale optimization problems, we are often concerned with the adaptivity complexity of an algorithm, which quantifies the number of sequential rounds in which polynomially many independent function evaluations can be executed in parallel. While low adaptivity is ideal, it is not sufficient for a distributed algorithm to be efficient: in many practical applications of submodular optimization, each function evaluation is expensive, so the total number of oracle queries must also be small. Motivated by these applications, we study the adaptivity and query complexity of adaptive submodular optimization. Our main result is a distributed algorithm for maximizing a monotone submodular function subject to a cardinality constraint $k$ that achieves a $(1-1/e-\varepsilon)$-approximation in expectation. This algorithm runs in $O(\log n)$ adaptive rounds and makes $O(n)$ calls to the function evaluation oracle in expectation. The approximation guarantee and query complexity are optimal, and the adaptivity is nearly optimal. Moreover, the number of queries is substantially smaller than in previous works. Finally, we extend our results to the submodular cover problem to demonstrate the generality of our algorithm and techniques.
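For context on why both measures matter, below is a minimal Python sketch (not the paper's algorithm) of the classic greedy baseline for this problem, paired with a hypothetical set-coverage oracle `f` for illustration. Greedy achieves the $(1-1/e)$ guarantee but requires $k$ fully sequential rounds and $O(nk)$ oracle queries, which is exactly the adaptivity and query cost that an $O(\log n)$-round, $O(n)$-query algorithm improves on.

```python
def greedy_max(f, ground_set, k):
    """Classic greedy for monotone submodular maximization under a
    cardinality constraint (Nemhauser-Wolsey-Fisher): achieves a
    (1 - 1/e)-approximation using k adaptive rounds and O(nk) queries."""
    S = set()
    for _ in range(k):            # k sequential (adaptive) rounds
        f_S = f(S)                # value of the current solution
        best, best_gain = None, 0.0
        for x in ground_set:      # O(n) oracle queries per round
            if x not in S:
                gain = f(S | {x}) - f_S  # marginal gain of adding x to S
                if gain > best_gain:
                    best, best_gain = x, gain
        if best is None:          # no element has positive marginal gain
            break
        S.add(best)
    return S

# Illustrative oracle (assumed for this sketch): a coverage function,
# f(S) = size of the union of the sets indexed by S. Coverage functions
# are monotone and submodular.
sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
f = lambda S: len(set().union(*(sets[i] for i in S)))
print(greedy_max(f, sets.keys(), k=2))  # e.g. {1, 3}, covering {1, 2, 3, 4, 5}
```

Low-adaptivity algorithms avoid this element-by-element sequential loop by evaluating many candidate sets in parallel within each round (broadly, via techniques such as threshold sampling), compressing the $k$ greedy rounds into logarithmically many adaptive rounds.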