We distinguish between two sources of uncertainty in experimental causal inference: design uncertainty, which arises from the treatment assignment mechanism, and sampling uncertainty, which arises when the sample is drawn from a super-population. This distinction matters in settings with small, fixed samples and heterogeneous treatment effects, such as geographical experiments. Most bootstrap procedures used by practitioners primarily estimate sampling uncertainty. Other methods for quantifying design uncertainty also fall short, because they are restricted to common designs and estimators, whereas non-standard designs and estimators are often used in these low-power regimes. We address this gap by proposing an integer programming approach that estimates design uncertainty for any known probabilistic assignment mechanism and for estimators that are linear or quadratic in the treatment. We provide asymptotic validity results and, through simulations of geographical experiments, demonstrate the sharper confidence intervals obtained by accurately accounting for non-standard design uncertainty.
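To make the design/sampling distinction concrete, the following is a minimal oracle simulation, not the paper's integer programming procedure: it holds a hypothetical finite sample of geographic units fixed (with both potential outcomes assumed known, purely for illustration) and varies only the treatment assignment drawn from a known probabilistic mechanism, tracing out the design-based distribution of a linear-in-treatment (difference-in-means) estimator. All unit counts, outcome values, and the Bernoulli(0.5) mechanism are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed sample of N geographic units with (normally unobserved)
# potential outcomes; treatment effects are heterogeneous across units.
N = 20
y0 = rng.normal(10.0, 2.0, size=N)       # potential outcomes under control
y1 = y0 + rng.normal(1.0, 1.5, size=N)   # heterogeneous treatment effects

def assign(rng):
    """Known probabilistic mechanism: each unit treated with prob. 0.5,
    redrawing degenerate all-treated / all-control assignments."""
    while True:
        z = rng.integers(0, 2, size=N)
        if 0 < z.sum() < N:
            return z

def diff_in_means(z, y):
    """Linear-in-treatment estimator: difference in group means."""
    return y[z == 1].mean() - y[z == 0].mean()

# Design uncertainty: the sample never changes; only the assignment does.
draws = []
for _ in range(10_000):
    z = assign(rng)
    y_obs = np.where(z == 1, y1, y0)      # outcomes revealed by this draw
    draws.append(diff_in_means(z, y_obs))

draws = np.array(draws)
print("mean estimate:", draws.mean())
print("design-based 95% interval:", np.quantile(draws, [0.025, 0.975]))
```

The spread of `draws` reflects design uncertainty alone; a bootstrap that resamples units would instead target sampling uncertainty from a super-population, which is the gap the abstract highlights.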