Empirical regression discontinuity (RD) studies often use covariates to increase the precision of their estimates. In this paper, we propose a novel class of estimators that use such covariate information more efficiently than the linear adjustment estimators that are currently used widely in practice. Our approach can accommodate a possibly large number of either discrete or continuous covariates. It involves running a standard RD analysis with an appropriately modified outcome variable, which takes the form of the difference between the original outcome and a function of the covariates. We characterize the function that leads to the estimator with the smallest asymptotic variance, and show how it can be estimated via modern machine learning, nonparametric regression, or classical parametric methods. The resulting estimator is easy to implement, as tuning parameters can be chosen as in a conventional RD analysis. An extensive simulation study illustrates the performance of our approach.
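The modified-outcome idea described above can be sketched in a few lines: learn an adjustment function of the covariates, subtract it from the outcome, and run a conventional local linear RD on the result. The sketch below is purely illustrative, not the paper's estimator — it uses a simple linear (OLS) adjustment as one of the "classical parametric" choices the abstract mentions, simulated data, a hypothetical cutoff at zero, and an arbitrary bandwidth `h`; the paper instead characterizes the variance-minimizing adjustment function and allows machine-learning fits.

```python
import numpy as np

# Simulated sharp RD data (illustrative only): running variable x with
# cutoff at 0, three covariates z, and true treatment effect tau = 1.
rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(-1, 1, n)                      # running variable
z = rng.normal(size=(n, 3))                    # covariates
tau = 1.0                                      # true treatment effect
y = (tau * (x >= 0) + 0.5 * x
     + z @ np.array([1.0, -1.0, 0.5])
     + rng.normal(scale=0.5, size=n))

# Step 1 (one simple parametric choice): fit an adjustment function
# mu(z) by OLS of the outcome on the covariates.
Z1 = np.column_stack([np.ones(n), z])
gamma, *_ = np.linalg.lstsq(Z1, y, rcond=None)
mu_hat = Z1 @ gamma

# Step 2: run a standard local linear RD analysis, but on the modified
# outcome M = Y - mu(Z) instead of Y itself.
def local_linear_rd(outcome, running, h):
    """Difference of local linear intercepts at the cutoff (at 0)."""
    limits = {}
    for side, mask in (("left", (running < 0) & (running > -h)),
                       ("right", (running >= 0) & (running < h))):
        X = np.column_stack([np.ones(mask.sum()), running[mask]])
        beta, *_ = np.linalg.lstsq(X, outcome[mask], rcond=None)
        limits[side] = beta[0]                 # intercept = limit at cutoff
    return limits["right"] - limits["left"]

tau_hat = local_linear_rd(y - mu_hat, x, h=0.3)
```

Because the covariates are smooth through the cutoff, subtracting any fixed function of them leaves the RD estimand unchanged while removing covariate-driven variance, which is why tuning parameters (bandwidth, kernel) can be chosen exactly as in a conventional RD analysis.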