The recent growth of Artificial Intelligence (AI), particularly large language models, requires energy-demanding high-performance computing (HPC) data centers, which place a significant burden on power system capacity. Scheduling data center computing jobs to manage power demand can alleviate network stress with minimal infrastructure investment and contribute to fast time-scale power system balancing. This study, for the first time, comprehensively analyzes the capability and cost of grid flexibility provision by GPU-heavy, AI-focused HPC data centers and compares them with the CPU-heavy general-purpose HPC data centers traditionally used for scientific computing. We propose a data center flexibility cost model that accounts for the value of computing. Using real-world computing traces from 7 AI-focused HPC data centers and 7 general-purpose HPC data centers, along with computing prices from 3 cloud platforms, we find that AI-focused HPC data centers can offer greater flexibility at 50% lower cost than general-purpose HPC data centers across a range of power system services. By comparing this cost with flexibility market prices, we demonstrate that flexibility provision can be financially profitable for AI-focused HPC data centers. Finally, our flexibility and cost estimates can be scaled to other data centers through simple algebraic operations on their parameters, avoiding the need for re-optimization.