Study samples often differ from the target populations of inference and policy decisions in non-random ways. Researchers typically believe that such departures from random sampling -- due to changes in the population over time and space, or difficulties in sampling truly randomly -- are small, and their corresponding impact on the inference should be small as well. We might therefore be concerned if the conclusions of our studies are excessively sensitive to a very small proportion of our sample data. We propose a method to assess the sensitivity of applied econometric conclusions to the removal of a small fraction of the sample. Manually checking the influence of all possible small subsets is computationally infeasible, so we use an approximation to find the most influential subset. Our metric, the "Approximate Maximum Influence Perturbation," is based on the classical influence function, and is automatically computable for common methods including (but not limited to) OLS, IV, MLE, GMM, and variational Bayes. We provide finite-sample error bounds on approximation performance. At minimal extra cost, we provide an exact finite-sample lower bound on sensitivity. We find that sensitivity is driven by a signal-to-noise ratio in the inference problem, is not reflected in standard errors, does not disappear asymptotically, and is not due to misspecification. While some empirical applications are robust, results of several influential economics papers can be overturned by removing less than 1% of the sample.
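To make the first-order approximation concrete, the following is a minimal sketch for OLS in Python (an illustration under our own naming, not the paper's implementation; amip_ols, its arguments, and the choice to target a single coefficient are assumptions of this sketch). Dropping observation i shifts the OLS fit by approximately -(X'X)^{-1} x_i e_i, so each observation gets a scalar influence score on the coefficient of interest; because the approximation is additive across dropped observations, sorting the scores and keeping the floor(alpha*n) most extreme yields the approximately most influential subset, and an exact refit without that subset gives the finite-sample check.

    import numpy as np

    def amip_ols(X, y, j, alpha=0.01):
        """Sketch of the Approximate Maximum Influence Perturbation for
        OLS coefficient j: score observations by their first-order
        (influence-function) effect, drop the floor(alpha*n) most
        influential, and compare the approximation to an exact refit."""
        n = X.shape[0]
        k = int(np.floor(alpha * n))         # size of the removed subset
        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ (X.T @ y)           # full-sample OLS estimate
        resid = y - X @ beta
        # First-order effect of dropping observation i on beta[j]:
        #   beta_{-i}[j] - beta[j]  ~=  -[(X'X)^{-1} x_i]_j * e_i
        infl = -(X @ XtX_inv[:, j]) * resid  # one score per observation
        # The approximation is additive in the dropped points, so the
        # subset that most *decreases* beta[j] is the k most negative
        # scores (take the k largest to target an increase instead);
        # their sum approximates the total change.
        drop = np.argsort(infl)[:k]
        approx_change = infl[drop].sum()
        keep = np.setdiff1d(np.arange(n), drop)
        # Exact refit without the flagged subset: an exact finite-sample
        # lower bound on sensitivity at minimal extra cost.
        beta_refit = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        return beta[j], beta[j] + approx_change, beta_refit[j]

Because the linear approximation makes a subset's effect a sum of per-observation scores, the sort step recovers the exact maximizer of the approximation over all subsets of size k; this additivity is what makes the search over combinatorially many subsets tractable.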