The development of exascale and post-exascale HPC and AI systems integrates thousands of CPUs and specialized accelerators, making energy optimization critical as power costs rival hardware expenses. To reduce consumption, frequency and voltage scaling techniques are widely used, but their effectiveness depends on adapting to application demands in real time. However, frequency scaling incurs a switching latency, which limits the responsiveness of dynamic tuning approaches. We propose a methodology to systematically evaluate the frequency switching latency of accelerators, with an implementation for CUDA. Our approach employs an artificial iterative workload designed for precise runtime measurements at different frequencies. The methodology consists of three phases: (1) measuring workload execution time across target frequencies, (2) determining switching latency by tracking the transition from an initial to a target frequency, and (3) filtering out outliers caused by external factors such as CUDA driver management or CPU interruptions. A robust statistical procedure ensures accuracy while minimizing total measurement time. We demonstrate this methodology on three NVIDIA GPUs - GH200, A100, and Quadro RTX 6000 - revealing significant variations in switching latency. These findings are crucial for designing energy-efficient runtime systems, helping determine optimal frequency change rates and avoiding transitions with excessive overhead.