A/B tests are the gold standard for evaluating digital experiences on the web. However, traditional "fixed-horizon" statistical methods are often incompatible with the needs of modern industry practitioners because they do not permit continuous monitoring of experiments. Frequent evaluation of fixed-horizon tests ("peeking") inflates the type-I error rate and can result in erroneous conclusions. We have released an experimentation service on the Adobe Experience Platform based on anytime-valid confidence sequences, allowing for continuous monitoring of the A/B test and data-dependent stopping. We describe how we adapted and deployed asymptotic confidence sequences in a full-featured A/B testing platform, how sample size calculations can be performed, and how alternative test statistics such as "lift" can be analyzed. On both simulated data and thousands of real experiments, we show the desirable properties of using anytime-valid methods instead of traditional approaches.
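To make the continuous-monitoring idea concrete, the following is a minimal, hypothetical Python sketch (not the production service described above) that tracks a running confidence sequence for the difference in conversion rates between a treatment arm and a control arm. It assumes one commonly used form of the asymptotic confidence sequence radius with a tuning parameter rho; the function name acs_radius and the simulated data are illustrative only.

```python
import numpy as np

def acs_radius(t, sigma_hat, alpha=0.05, rho=0.1):
    """Radius of a (1 - alpha) asymptotic confidence sequence after t observations.

    Assumes the commonly cited boundary
        sigma_hat * sqrt( 2*(t*rho**2 + 1) / (t**2 * rho**2)
                          * log( sqrt(t*rho**2 + 1) / alpha ) ),
    where rho > 0 tunes the sample size at which the interval is tightest.
    Any rho > 0 yields a valid sequence; the choice only affects tightness.
    """
    tr2 = t * rho ** 2
    return sigma_hat * np.sqrt(
        2.0 * (tr2 + 1.0) / (t ** 2 * rho ** 2) * np.log(np.sqrt(tr2 + 1.0) / alpha)
    )

# Toy example: peek at the running difference in conversion rates at several times.
rng = np.random.default_rng(0)
control = rng.binomial(1, 0.10, size=10_000)    # hypothetical control conversions
treatment = rng.binomial(1, 0.11, size=10_000)  # hypothetical treatment conversions

for t in (1_000, 5_000, 10_000):
    diff = treatment[:t].mean() - control[:t].mean()
    # Standard deviation of a single paired difference (treatment minus control).
    sigma_hat = np.sqrt(treatment[:t].var(ddof=1) + control[:t].var(ddof=1))
    r = acs_radius(t, sigma_hat, alpha=0.05, rho=0.1)
    print(f"t={t:6d}  diff={diff:+.4f}  CS=[{diff - r:+.4f}, {diff + r:+.4f}]")
```

Because the interval is valid uniformly over time, the experimenter may inspect it after every print statement above (or after every new visitor) and stop as soon as zero is excluded, without the error-rate inflation that peeking causes for fixed-horizon tests.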