Big data analytics has opened new avenues in economic research, but the challenge of analyzing datasets with tens of millions of observations is substantial. Conventional econometric methods based on extremum estimators require large amounts of computing resources and memory, which are often not readily available. In this paper, we focus on linear quantile regression applied to ``ultra-large'' datasets, such as U.S. decennial censuses. We present a fast inference framework that exploits stochastic sub-gradient descent (S-subGD) updates. The inference procedure handles cross-sectional data sequentially: (i) updating the parameter estimate with each incoming ``new observation'', (ii) aggregating it as a Polyak-Ruppert average, and (iii) computing a pivotal statistic for inference using only the solution path. The methodology draws on insights from time-series regression to construct an asymptotically pivotal statistic via random scaling. The proposed test statistic is computed in a fully online fashion, and its critical values are obtained without resampling. We conduct extensive numerical studies to showcase the computational merits of the proposed inference procedure. For inference problems as large as $(n, d) \sim (10^7, 10^3)$, where $n$ is the sample size and $d$ is the number of regressors, our method generates new insights, outperforming existing inference methods in computation time. Specifically, our method reveals trends in the gender gap in the U.S. college wage premium using millions of observations, while controlling for over $10^3$ covariates to mitigate confounding effects.
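To make the three steps above concrete, the following is a minimal sketch in Python of what a single-pass S-subGD loop with Polyak-Ruppert averaging and random-scaling inference might look like. It assumes the standard subgradient of the quantile check loss, a step size $\gamma_i = \gamma_0 i^{-a}$ with $a \in (1/2, 1)$, and the random-scaling variance $\hat{V}_n = n^{-2} \sum_{s=1}^{n} s^2 (\bar{\beta}_s - \bar{\beta}_n)(\bar{\beta}_s - \bar{\beta}_n)'$ from the time-series literature; the function name `ssubgd_quantile` and all tuning constants are illustrative choices, not the paper's exact specification.

```python
import numpy as np

def ssubgd_quantile(X, y, tau=0.5, gamma0=1.0, alpha=0.501):
    """One pass of S-subGD for linear quantile regression at level tau,
    with Polyak-Ruppert averaging and online random-scaling sums."""
    n, d = X.shape
    beta = np.zeros(d)        # current S-subGD iterate
    bbar = np.zeros(d)        # running Polyak-Ruppert average
    A = np.zeros((d, d))      # sum_s s^2 * bbar_s bbar_s'
    b = np.zeros(d)           # sum_s s^2 * bbar_s
    c = 0.0                   # sum_s s^2
    for i in range(1, n + 1):
        x_i, y_i = X[i - 1], y[i - 1]
        # (i) subgradient of the check loss rho_tau(y - x'beta) and update
        g = x_i * ((y_i <= x_i @ beta) - tau)
        beta -= gamma0 * i ** (-alpha) * g
        # (ii) online Polyak-Ruppert average
        bbar += (beta - bbar) / i
        # (iii) accumulate random-scaling sums along the solution path
        A += i**2 * np.outer(bbar, bbar)
        b += i**2 * bbar
        c += i**2
    # Vhat = n^{-2} sum_s s^2 (bbar_s - bbar_n)(bbar_s - bbar_n)'
    Vhat = (A - np.outer(b, bbar) - np.outer(bbar, b)
            + c * np.outer(bbar, bbar)) / n**2
    return bbar, Vhat

# usage: median regression with a known coefficient vector
rng = np.random.default_rng(0)
n, d = 100_000, 5
X = np.column_stack([np.ones(n), rng.standard_normal((n, d - 1))])
beta_true = np.ones(d)
y = X @ beta_true + rng.standard_normal(n)   # median(error) = 0
bbar, Vhat = ssubgd_quantile(X, y, tau=0.5)
t_stat = np.sqrt(n) * (bbar[1] - beta_true[1]) / np.sqrt(Vhat[1, 1])
print(bbar, t_stat)
```

Because random scaling normalizes by the path-dependent $\hat{V}_n$ rather than a consistent variance estimate, the resulting t-statistic is compared against the mixed-normal critical values tabulated for this statistic (e.g., roughly 6.747 for a two-sided 5% test) rather than the standard normal 1.96; no resampling is needed.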