Online controlled experiments (i.e., A/B tests) are a critical tool used by businesses with digital operations to optimize their products and services. These experiments routinely track information related to various business metrics, each of which summarizes a different aspect of how users interact with an online platform. Although multiple metrics are commonly tracked, this information is often underutilized: the metrics may be aggregated into a single composite measure, discarding valuable information, or strict family-wise error rate adjustments may be imposed, reducing power. In this paper, we propose an economical framework for designing Bayesian A/B tests that control both power and the false discovery rate (FDR). Selecting optimal decision thresholds to control power and the FDR typically relies on intensive simulation at each sample size considered. Our framework efficiently recommends optimal sample sizes and decision thresholds for Bayesian A/B tests that satisfy criteria for the FDR and average power. Our approach is efficient because we leverage new theoretical results to obtain these recommendations using simulations conducted at only two sample sizes. Our methodology is illustrated using an example based on a real A/B test involving several metrics.
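To make the abstract's setup concrete, the sketch below shows the kind of simulation-based calibration the paper describes as typically intensive: estimating the FDR and average power of a multi-metric Bayesian A/B test at one candidate sample size and posterior-probability threshold. This is a minimal illustration, not the authors' framework; it assumes a conjugate normal model with known variance and a flat prior, and all names (`simulate_fdr_power`, `gamma`, the effect sizes in `deltas`) are hypothetical choices made for the example.

```python
# Minimal sketch (assumptions noted above, not the paper's implementation):
# Monte Carlo estimation of FDR and average power for a Bayesian A/B test
# tracking m metrics, at a fixed per-arm sample size n and threshold gamma.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2024)

def simulate_fdr_power(n, gamma, deltas, sigma=1.0, reps=2000):
    """Estimate FDR and average power across m metrics.

    deltas : true treatment effects per metric (0.0 marks a null metric).
    A metric is 'discovered' when Pr(delta_j > 0 | data) > gamma. Under a
    flat prior with known variance sigma**2 and n observations per arm, the
    posterior probability has the closed form Phi(diff / (sigma*sqrt(2/n))),
    where diff is the observed treatment-minus-control mean difference.
    """
    deltas = np.asarray(deltas)
    nonnull = deltas != 0
    fdp = np.zeros(reps)    # false discovery proportion per replication
    power = np.zeros(reps)  # fraction of non-null metrics discovered
    for r in range(reps):
        # Sampling distribution of the mean difference for each metric.
        diff = rng.normal(deltas, sigma * np.sqrt(2 / n))
        post_prob = norm.cdf(diff / (sigma * np.sqrt(2 / n)))
        disc = post_prob > gamma
        fdp[r] = (disc & ~nonnull).sum() / max(disc.sum(), 1)
        power[r] = (disc & nonnull).sum() / nonnull.sum()
    return fdp.mean(), power.mean()

# Three non-null metrics and two nulls; scan thresholds at one sample size.
deltas = [0.20, 0.15, 0.10, 0.0, 0.0]
for gamma in (0.90, 0.95, 0.99):
    fdr, avg_power = simulate_fdr_power(n=500, gamma=gamma, deltas=deltas)
    print(f"gamma={gamma:.2f}: FDR~{fdr:.3f}, average power~{avg_power:.3f}")
```

Repeating such a scan over a grid of sample sizes is what drives the cost the abstract refers to; the paper's contribution is theory that lets the design criteria be met using simulations at only two sample sizes, rather than at every candidate n.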