Subsampling is a general statistical method developed in the 1990s that aims to estimate the sampling distribution of a statistic $\hat\theta_n$ in order to conduct nonparametric inference such as the construction of confidence intervals and hypothesis tests. Subsampling has seen a resurgence in the Big Data era, where the standard, full-resample-size bootstrap can be infeasible to compute. Nevertheless, even choosing a single random subsample of size $b$ can be computationally challenging when both $b$ and the sample size $n$ are very large. In the paper at hand, we show how a set of appropriately chosen, non-random subsamples can be used to conduct effective -- and computationally feasible -- distribution estimation via subsampling. Further, we show how the same set of subsamples can be used to yield a procedure for subsampling aggregation -- also known as subagging -- that is scalable with big data. Interestingly, the scalable subagging estimator can be tuned to have the same (or better) rate of convergence as $\hat\theta_n$. The paper concludes by showing how to conduct inference, e.g., confidence intervals, based on the scalable subagging estimator instead of the original $\hat\theta_n$.
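As a rough illustration of the general idea (not the paper's actual algorithm), the sketch below takes the $q = \lfloor n/b \rfloor$ non-overlapping blocks of size $b$ as the non-random subsamples, uses the sample mean as $\hat\theta_n$ with rate $\tau_n = \sqrt{n}$, and computes the statistic once per block. The centered, rescaled block statistics serve as the subsampling estimate of the sampling distribution, and their plain average serves as a simple subagging estimator. The block scheme, the choice of statistic, and the $\sqrt{n}$ rate are all assumptions made here for illustration only.

```python
import numpy as np

def scalable_subsampling_sketch(x, b, statistic=np.mean, tau=np.sqrt):
    """Illustrative sketch: split x into q = n // b non-overlapping
    (non-random) blocks of size b, compute the statistic on each block,
    and return
      - the full-sample estimate theta_n,
      - a simple subagging estimate (average of block statistics),
      - the subsampling distribution estimate tau(b)*(theta_b_i - theta_n).
    """
    n = len(x)
    q = n // b
    theta_n = statistic(x)  # full-sample estimate
    # statistic evaluated on each non-random subsample x[i*b:(i+1)*b]
    theta_b = np.array([statistic(x[i * b:(i + 1) * b]) for i in range(q)])
    dist = tau(b) * (theta_b - theta_n)  # subsampling distribution estimate
    subag = theta_b.mean()               # subagging estimator
    return theta_n, subag, dist

# Example usage: 95% equal-tailed subsampling confidence interval for the mean
rng = np.random.default_rng(0)
x = rng.exponential(size=100_000)
b = 1_000
theta_n, subag, dist = scalable_subsampling_sketch(x, b)
lo_q, hi_q = np.quantile(dist, [0.025, 0.975])
ci = (theta_n - hi_q / np.sqrt(len(x)), theta_n - lo_q / np.sqrt(len(x)))
```

The usage example inverts the estimated distribution of $\tau_b(\hat\theta_{b,i} - \hat\theta_n)$ in the standard subsampling way, giving the interval $[\hat\theta_n - q_{0.975}/\tau_n,\ \hat\theta_n - q_{0.025}/\tau_n]$; how to base such intervals on the subagging estimator itself is what the paper addresses.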