A central obstacle in the objective assessment of treatment effect (TE) estimators in randomized controlled trials (RCTs) is the lack of ground truth (or a validation set) against which to test their performance. In this paper, we provide a novel cross-validation-like methodology to address this challenge. The key insight of our procedure is that the noisy (but unbiased) difference-of-means estimate can be used as a ground truth "label" on a portion of the RCT, to test the performance of an estimator trained on the other portion. We combine this insight with an aggregation scheme, which borrows statistical strength across a large collection of RCTs, to present an end-to-end methodology for judging an estimator's ability to recover the underlying treatment effect. We evaluate our methodology across 709 RCTs implemented in the Amazon supply chain. In this corpus of A/B tests at Amazon, we highlight the unique difficulties associated with recovering the treatment effect due to the heavy-tailed nature of the response variables. In this heavy-tailed setting, our methodology suggests that procedures that aggressively downweight or truncate large values, while introducing bias, lower the variance enough that the treatment effect is more accurately estimated.
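The split-and-label idea above can be illustrated with a minimal simulation sketch. This is not the paper's implementation: the log-normal outcome model, the 95% winsorization cap, the 50/50 split, and all parameter values are illustrative assumptions. It shows how the noisy difference-of-means on a held-out half of each RCT serves as an unbiased "label" for scoring an estimator fit on the other half, with errors aggregated across many simulated RCTs.

```python
import numpy as np

rng = np.random.default_rng(0)

def diff_of_means(y, treat):
    """Unbiased but high-variance difference-of-means estimate."""
    return y[treat].mean() - y[~treat].mean()

def winsorized_diff(y, treat, q=0.95):
    """Biased but lower-variance estimate: cap outcomes at the q-quantile
    before taking the difference of means (illustrative choice of q)."""
    capped = np.minimum(y, np.quantile(y, q))
    return capped[treat].mean() - capped[~treat].mean()

def split_error(y, treat, estimator, rng):
    """Score an estimator against the noisy difference-of-means 'label'
    computed on a held-out half of the same RCT."""
    idx = rng.permutation(len(y))
    fit, hold = idx[: len(y) // 2], idx[len(y) // 2:]
    label = diff_of_means(y[hold], treat[hold])  # unbiased ground-truth proxy
    return (estimator(y[fit], treat[fit]) - label) ** 2

# Aggregate errors over a collection of simulated heavy-tailed RCTs.
true_te, n, n_rcts = 0.5, 1000, 1000
err_naive, err_winsor = [], []
for _ in range(n_rcts):
    treat = rng.integers(0, 2, size=n).astype(bool)
    y = rng.lognormal(sigma=2.0, size=n) + true_te * treat  # heavy-tailed outcomes
    err_naive.append(split_error(y, treat, diff_of_means, rng))
    err_winsor.append(split_error(y, treat, winsorized_diff, rng))

print(f"naive aggregated MSE:      {np.mean(err_naive):.2f}")
print(f"winsorized aggregated MSE: {np.mean(err_winsor):.2f}")
```

Under these heavy-tailed simulated outcomes, the winsorized estimator's aggregated error against the noisy labels is typically much smaller than the naive estimator's, mirroring the bias-variance trade-off described in the abstract.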