Experimentation on online digital platforms is used to inform decision making. Specifically, the goal of many experiments is to optimize a metric of interest. Null hypothesis statistical testing can be ill-suited to this task, as it is indifferent to the magnitude of effect sizes and to opportunity costs. Given access to a pool of related past experiments, we discuss how experimentation practice should change when the goal is optimization. We survey the literature on empirical Bayes analyses of A/B test portfolios, and single out the A/B Testing Problem (Azevedo et al., 2020), which treats experimentation as a constrained optimization problem, as our starting point. We show that the framework can be solved with dynamic programming and implemented by appropriately tuning $p$-value thresholds. Furthermore, we develop several extensions of the A/B Testing Problem and discuss the implications of these results for experimentation programs in industry. For example, under no-cost assumptions, firms should be testing many more ideas, reducing test allocation sizes, and relaxing $p$-value thresholds away from $p = 0.05$.
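To make the threshold-tuning idea concrete, the following minimal sketch (an illustration under assumed parameters, not the paper's implementation) evaluates a one-sided $p$-value launch rule against an empirical-Bayes normal prior over true lifts; the prior parameters `mu0` and `tau` and the standard error `se` are hypothetical stand-ins for quantities that would be fitted from a portfolio of past experiments.

```python
# Illustrative sketch: empirical-Bayes tuning of a one-sided p-value launch threshold.
# Assumption: true lifts theta ~ N(mu0, tau^2) (fitted from past tests), and each new
# A/B test yields an estimate theta_hat ~ N(theta, se^2). All parameter values are hypothetical.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

mu0, tau = -0.1, 1.0   # hypothetical prior: most ideas have small or negative lift
se = 0.5               # hypothetical standard error of the estimated lift in one test
alphas = np.linspace(0.005, 0.5, 100)

# Monte Carlo over the portfolio of ideas: true lifts, then noisy estimates.
theta = rng.normal(mu0, tau, size=200_000)
theta_hat = theta + rng.normal(0.0, se, size=theta.size)

def expected_gain(alpha):
    """Average realized lift per idea if we launch whenever the one-sided p-value < alpha."""
    crit = norm.ppf(1 - alpha) * se        # launch iff theta_hat exceeds this cutoff
    return np.mean(theta * (theta_hat > crit))

gains = np.array([expected_gain(a) for a in alphas])
best = alphas[np.argmax(gains)]
print(f"gain at p<0.05: {expected_gain(0.05):.4f}, gain-maximizing alpha ~ {best:.3f}")
```

Under priors of this kind and with cheap tests, the gain-maximizing $\alpha$ often sits well above 0.05, which is the qualitative direction of the abstract's claim about relaxing $p$-value thresholds under no-cost assumptions.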