The source code of Function as a Service (FaaS) applications is constantly being refined. To detect whether a source code change introduces a significant performance regression, the traditional benchmarking approach evaluates the old and the new function version separately using numerous artificial requests. In this paper, we describe a wrapper approach that enables the Randomized Multiple Interleaved Trials (RMIT) benchmark execution methodology in FaaS environments and uses bootstrapping percentile intervals to derive more accurate confidence intervals for detected performance changes. We evaluate our approach using two public FaaS providers, an artificial performance issue, and several benchmark configuration parameters. We conclude that RMIT can shrink the width of the resulting confidence intervals from 10.65% with the traditional approach to 0.37% with RMIT and thus enable more fine-grained performance change detection.
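To illustrate the two ingredients named above, the following minimal Python sketch (not taken from the paper) combines randomized interleaved trials with a bootstrap percentile interval over the relative change in mean latency. The callables `invoke_old` and `invoke_new` are hypothetical placeholders that each perform one request against a deployed function version and return its measured latency.

```python
import random
import statistics


def rmit_measurements(invoke_old, invoke_new, trials=20, calls_per_trial=10):
    """Randomized Multiple Interleaved Trials: within each trial, calls to the
    old and new function versions are executed in a randomly shuffled order,
    so that platform noise affects both versions alike."""
    old, new = [], []
    for _ in range(trials):
        schedule = ["old"] * calls_per_trial + ["new"] * calls_per_trial
        random.shuffle(schedule)  # random interleaving within this trial
        for version in schedule:
            if version == "old":
                old.append(invoke_old())
            else:
                new.append(invoke_new())
    return old, new


def bootstrap_percentile_ci(old, new, resamples=10_000, alpha=0.05):
    """Bootstrap percentile interval for the relative change in mean latency
    between the new and the old version (e.g. 0.05 means 5% slower)."""
    diffs = []
    for _ in range(resamples):
        o = [random.choice(old) for _ in old]  # resample with replacement
        n = [random.choice(new) for _ in new]
        diffs.append((statistics.mean(n) - statistics.mean(o)) / statistics.mean(o))
    diffs.sort()
    lower = diffs[int((alpha / 2) * resamples)]
    upper = diffs[int((1 - alpha / 2) * resamples) - 1]
    return lower, upper
```

In this sketch, the width `upper - lower` corresponds to the confidence interval width reported in the evaluation; the interleaving is what distinguishes RMIT from benchmarking each version in a separate, sequential run.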