Over the past decade, most technology companies and a growing number of conventional firms have incorporated online experimentation (or A/B testing) into their product development processes. Initially, A/B testing was deployed as a static procedure: users were randomly split so that half saw the control (the standard offering) and the other half saw the treatment (the new version). The results were then used to inform the decision of which version to release widely. More recently, as experimentation has matured, firms have developed a more dynamic approach in which a new version (the treatment) is gradually released to a growing number of units through a sequence of randomized experiments, known as iterations. In this paper, we develop a theoretical framework to quantify the value created by such dynamic, or iterative, experimentation. We apply our framework to seven months of LinkedIn experiments and show that iterative experimentation led to an additional 20% improvement in one of the firm's primary metrics.
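To make the ramp-up mechanism concrete, here is a minimal Python sketch of an iterative experiment: a treatment is exposed to a growing fraction of units across a sequence of randomized experiments, with an early-stopping rule if the estimated effect looks harmful. The exposure schedule, stopping threshold, and all function names are illustrative assumptions, not the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_iteration(true_effect, n_units, exposure, noise_sd=1.0):
    """Simulate one randomized experiment (iteration) at a given exposure level.

    Exposed units are split evenly between control and treatment;
    returns the estimated treatment effect and its standard error.
    """
    n_arm = int(n_units * exposure) // 2
    control = rng.normal(0.0, noise_sd, n_arm)
    treatment = rng.normal(true_effect, noise_sd, n_arm)
    est = treatment.mean() - control.mean()
    se = np.sqrt(control.var(ddof=1) / n_arm + treatment.var(ddof=1) / n_arm)
    return est, se

def iterative_experiment(true_effect, n_units=100_000,
                         ramp=(0.01, 0.05, 0.25, 0.50), z_stop=-1.64):
    """Ramp the treatment through a sequence of iterations.

    After each iteration, either abort (strong evidence of a negative
    effect) or proceed to the next, larger exposure level.
    """
    for exposure in ramp:
        est, se = run_iteration(true_effect, n_units, exposure)
        if est / se < z_stop:          # evidence of harm: stop early
            return exposure, "abort"
    return ramp[-1], "ship"            # survived all iterations: release widely

# A harmful variant is typically caught at low exposure, limiting damage,
# while a beneficial one ramps up to a full release.
print(iterative_experiment(true_effect=-0.05))
print(iterative_experiment(true_effect=+0.05))
```

The value of iterating comes from this asymmetry: bad treatments are stopped while only a small fraction of units is exposed, whereas good treatments survive the ramp and are released widely.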