The regression discontinuity (RD) design is widely used for program evaluation with observational data. By exploiting a known, deterministic treatment assignment mechanism, the RD design enables identification of the local average treatment effect (LATE) at the treatment cutoff. The existing literature has focused primarily on developing rigorous estimation methods for the LATE. In contrast, we consider policy learning under the RD design. We develop a robust optimization approach to finding an optimal treatment cutoff that improves upon the existing one. Under the RD design, policy learning requires extrapolation. We address this problem by partially identifying the conditional expectation function of the counterfactual outcome under a smoothness assumption commonly used for the estimation of the LATE. We then minimize the worst-case regret relative to the status quo policy. The resulting new treatment cutoffs come with a safety guarantee, enabling policymakers to limit the probability that they yield a worse outcome than the existing cutoff. Going beyond the standard single-cutoff case, we generalize the proposed methodology to the multi-cutoff RD design by developing a doubly robust estimator. We establish asymptotic regret bounds for the learned policy using semiparametric efficiency theory. Finally, we apply the proposed methodology to empirical and simulated data sets.
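To make the worst-case-regret idea concrete, the following is a minimal, self-contained Python sketch for the single-cutoff case. It is illustrative only, not the paper's estimator: the function names (`local_mean`, `worst_case_regret`, `learn_cutoff`), the bandwidth `bw`, the Lipschitz constant `C_lip`, and the assignment convention D = 1{X >= c0} are all hypothetical choices, and the crude local averages stand in for the semiparametrically efficient, doubly robust estimation described above.

```python
import numpy as np

def local_mean(y, x, center, bw):
    """Crude local average of y for |x - center| <= bw (a stand-in for a
    proper local-polynomial fit at the cutoff)."""
    mask = np.abs(x - center) <= bw
    return y[mask].mean() if mask.any() else np.nan

def worst_case_regret(c_new, c0, x, y, C_lip, bw=0.1):
    """Worst-case regret, relative to the status quo cutoff c0, of moving the
    cutoff to c_new when treatment is assigned as D = 1{X >= c0}.

    For units whose treatment would flip, one potential-outcome mean is
    unobserved; under a Lipschitz bound with constant C_lip it lies within
    +/- C_lip * |x - c0| of its boundary value at the cutoff.
    """
    d = (x >= c0).astype(int)
    mu1_c0 = local_mean(y[d == 1], x[d == 1], c0, bw)  # treated side at cutoff
    mu0_c0 = local_mean(y[d == 0], x[d == 0], c0, bw)  # control side at cutoff
    lo, hi = min(c_new, c0), max(c_new, c0)
    flipped = (x >= lo) & (x < hi)
    if not flipped.any():
        return 0.0
    dist = np.abs(x[flipped] - c0)
    if c_new < c0:
        # Newly treated units: untreated outcomes are observed (y); the
        # worst-case treated mean is the lower Lipschitz envelope of mu1.
        worst_gain = np.mean((mu1_c0 - C_lip * dist) - y[flipped])
    else:
        # Newly untreated units: treated outcomes are observed (y); the
        # worst-case untreated mean is the lower Lipschitz envelope of mu0.
        worst_gain = np.mean((mu0_c0 - C_lip * dist) - y[flipped])
    # Scale by the share of units affected so regret is a population average.
    return max(0.0, -worst_gain) * flipped.mean()

def learn_cutoff(x, y, c0, C_lip, grid):
    """Return the candidate cutoff minimizing worst-case regret; the status
    quo c0 always attains zero, so it is never beaten in the worst case."""
    regrets = np.array([worst_case_regret(c, c0, x, y, C_lip) for c in grid])
    return grid[int(np.argmin(regrets))]
```

For example, `learn_cutoff(x, y, c0=0.0, C_lip=2.0, grid=np.linspace(-0.5, 0.5, 101))` scans candidate cutoffs around the status quo. Because the status quo itself has zero worst-case regret, the learned cutoff only moves when extrapolation under the smoothness bound cannot make the move harmful, mirroring the safety guarantee described above.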