This paper studies the parametric bootstrap method for networks to quantify the uncertainty of statistics of interest. While existing network resampling methods focus primarily on count statistics under node-exchangeable (graphon) models, we consider more general network statistics (including local statistics) under the Chung-Lu model without node exchangeability. We show that the natural network parametric bootstrap, which first estimates the network-generating model and then draws bootstrap samples from the estimated model, generally suffers from bootstrap bias. As a general recipe for addressing this problem, we show that a two-level bootstrap procedure provably reduces the bias. This essentially extends the classical idea of the iterated bootstrap to the network setting, where the number of parameters grows with the network size. Moreover, for many network statistics, the second-level bootstrap also provides a way to construct confidence intervals with higher accuracy. As a byproduct of our effort to construct confidence intervals, we also prove the asymptotic normality of subgraph counts under the Chung-Lu model.
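To make the procedure concrete, the following is a minimal sketch (not the paper's implementation) of a network parametric bootstrap under the Chung-Lu model: estimate the degree parameters from the observed network, resample networks from the fitted model, and recompute a statistic (here, the triangle count) on each replicate. The estimator `theta_i = d_i / sqrt(sum_j d_j)` is the standard moment estimator; the first-order bias correction `2*T - mean(T*)` shown in the usage note is the classical single-level correction that the paper's two-level procedure refines.

```python
import numpy as np

def chung_lu_sample(theta, rng):
    """Draw an undirected adjacency matrix with P(A_ij = 1) = min(theta_i * theta_j, 1)."""
    n = len(theta)
    P = np.minimum(np.outer(theta, theta), 1.0)
    U = rng.random((n, n))
    A = np.triu((U < P).astype(int), k=1)  # keep upper triangle, no self-loops
    return A + A.T                          # symmetrize

def estimate_theta(A):
    """Moment estimator of Chung-Lu parameters: theta_i = d_i / sqrt(total degree)."""
    d = A.sum(axis=1)
    return d / np.sqrt(d.sum())

def triangle_count(A):
    """Number of triangles, via trace(A^3) / 6."""
    return np.trace(A @ A @ A) / 6

def parametric_bootstrap(A, stat, B=200, seed=0):
    """Fit the Chung-Lu model once, then resample B networks and recompute stat."""
    rng = np.random.default_rng(seed)
    theta_hat = estimate_theta(A)
    return np.array([stat(chung_lu_sample(theta_hat, rng)) for _ in range(B)])
```

For example, given an observed network `A`, `reps = parametric_bootstrap(A, triangle_count)` yields bootstrap replicates whose spread quantifies uncertainty, and `2 * triangle_count(A) - reps.mean()` gives the first-order bias-corrected estimate; the two-level procedure in the paper re-estimates the model within each first-level replicate to correct this bias further.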