This work introduces a model-based framework that reveals the untapped idle opportunity of modern servers running latency-critical applications. Specifically, three queuing models, M/M/1, c×M/M/1, and M/M/c, are used to estimate the theoretical idle-time distribution at the CPU-core and system (package) level. A comparison between the measured idleness of a real server and that predicted by the theoretical models reveals significant missed opportunities to enter deep idle states. This inefficiency is attributed to idle-governor inaccuracy and the high latency of transitioning to and from legacy deep idle states. The proposed methodology enables early-stage design exploration and provides insight into idle-time behavior and opportunities across varying server configurations and loads.
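For illustration (a standard queuing result, not a formula restated from this work), in the baseline M/M/1 model with Poisson arrival rate \(\lambda\) and service rate \(\mu\), the fraction of time a core is idle and the length of each idle period follow directly from the memoryless arrivals:

\[
P(\text{idle}) = 1 - \rho, \qquad \rho = \frac{\lambda}{\mu}, \qquad \Pr(I > t) = e^{-\lambda t},
\]

i.e., each idle period \(I\) is exponentially distributed with mean \(1/\lambda\). Such a theoretical distribution can then be set against the idle-state residencies observed on real hardware to expose how much deep-idle potential the governor leaves unused.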