During the early stages of CPU design, benchmarks can only be run on simulators to evaluate CPU performance. However, most big data benchmarks have code bases so large that they cannot finish running on simulators within an acceptable time. Moreover, big data benchmarks usually require complex software stacks, which are hard to port onto simulators. Proxy benchmarks, which need neither long running times nor complex software stacks, have the same micro-architectural metrics as the real benchmarks, which means they can represent the real benchmarks' micro-architectural characteristics. Therefore, proxy benchmarks can replace real benchmarks on simulators for CPU performance evaluation. The biggest challenge is guaranteeing that the proxy benchmarks have exactly the same micro-architectural metrics as the real benchmarks when the number of micro-architectural metrics is very large. To address this challenge, we propose a linear combination-based proxy benchmark generation methodology that transforms the problem into solving a linear equation system. We also design the corresponding algorithms to guarantee convergence: even when the linear equation system has no unique solution, the algorithm finds the best approximate solution via the non-negative least squares method. We generate fifteen proxy benchmarks and evaluate their running time and accuracy against the corresponding real benchmarks for MySQL and RocksDB. On a typical Intel Xeon platform, the average running time of the proxy benchmarks is 1.62 s and the average accuracy of every micro-architectural metric exceeds 92%, whereas the longest-running real benchmark takes nearly 4 hours. We also conduct two case studies demonstrating that our proxy benchmarks behave consistently with the real benchmarks both before and after prefetching or Hyper-Threading is turned on.
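To make the core idea concrete, the sketch below shows one way the non-negative least squares step could look in Python. This is a minimal illustration under our own assumptions, not the paper's implementation: we assume each column of a matrix A holds the micro-architectural metrics of one candidate code snippet, b holds the metrics measured on the real benchmark, and the goal is a non-negative weight vector w with A w ≈ b. The names A, b, w and the metric counts are hypothetical.

```python
# Illustrative sketch only (assumed formulation, not the paper's code):
# find non-negative weights w such that A @ w approximates b, where each
# column of A is the metric vector of one candidate snippet and b is the
# metric vector of the real benchmark.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_metrics, n_snippets = 20, 8              # e.g. IPC, cache miss ratios, branch MPKI, ...
A = rng.random((n_metrics, n_snippets))    # metrics of candidate snippets (columns)
b = rng.random(n_metrics)                  # metrics measured on the real benchmark

# Non-negative least squares: even when A w = b has no exact (or unique)
# solution, NNLS returns the w >= 0 minimizing ||A w - b||_2.
w, residual = nnls(A, b)
print("weights:", np.round(w, 3))
print("residual norm:", residual)
```

The residual norm indicates how closely the weighted combination can reproduce the target metrics; a proxy generator could use it to decide whether more candidate snippets are needed.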