Running machine learning algorithms on large and rapidly growing volumes of data is often computationally expensive. One common trick to reduce the size of a data set, and thus the computational cost of machine learning algorithms, is \emph{probability sampling}: it creates a sampled data set by including each data point from the original data set with a known probability. Although the benefit of running machine learning algorithms on the reduced data set is obvious, one major concern is that the performance of the solution obtained from the samples might be much worse than that of the optimal solution computed on the full data set. In this paper, we examine the performance loss caused by probability sampling in the context of adaptive submodular maximization. We consider a simple probability sampling method that selects each data point with probability $r\in[0,1]$; if we set the sampling rate to $r=1$, our problem reduces to finding a solution based on the original full data set. We define the \emph{sampling gap} as the largest ratio, over independence systems, between the value of the optimal solution obtained from the full data set and that of the optimal solution obtained from the samples; it captures the performance loss of the optimal solution caused by probability sampling. Our main contribution is to show that if the utility function is policywise submodular, then for a given sampling rate $r$, the sampling gap is both upper bounded and lower bounded by $1/r$. One immediate implication of this result is that any $\alpha$-approximation solution computed on a data set sampled at rate $r$ achieves an $\alpha r$ approximation ratio against the optimal solution on the full data set.
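To make this implication concrete, the following display sketches the chain of inequalities behind the $\alpha r$ guarantee, using illustrative notation not fixed by the abstract: $f_{\mathrm{avg}}(\cdot)$ denotes the (expected) utility of a policy, $\pi^{*}$ the optimal policy on the full data set, $\pi^{*}_{r}$ the optimal policy restricted to a data set sampled at rate $r$, and $\pi^{\alpha}_{r}$ an $\alpha$-approximation computed on the samples.
\[
f_{\mathrm{avg}}(\pi^{\alpha}_{r}) \;\ge\; \alpha\, f_{\mathrm{avg}}(\pi^{*}_{r}) \;\ge\; \alpha r\, f_{\mathrm{avg}}(\pi^{*}),
\]
where the first inequality is the assumed approximation guarantee on the sampled data set, and the second follows from the sampling-gap bound $f_{\mathrm{avg}}(\pi^{*})/f_{\mathrm{avg}}(\pi^{*}_{r}) \le 1/r$.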