Data centers are huge power consumers, both because of the energy required for computation and the cooling needed to keep servers below thermal redlining. The most common technique to minimize cooling costs is increasing data room temperature. However, to avoid reliability issues and to enhance energy efficiency, there is a need to predict the temperature attained by servers under variable cooling setups. Due to the complex thermal dynamics of data rooms, accurate runtime data center temperature prediction has remained an important challenge. By using Grammatical Evolution techniques, this paper presents a methodology for the generation of temperature models for data centers and the runtime prediction of CPU and inlet temperature under variable cooling setups. As opposed to time-costly Computational Fluid Dynamics techniques, our models do not need specific knowledge about the problem, can be used in arbitrary data centers, can be re-trained if conditions change, and have negligible overhead during runtime prediction. Our models have been trained and tested using traces from real data center scenarios. Our results show that we can fully predict the temperature of the servers in a data room, with prediction errors below 2°C and 0.5°C for CPU and server inlet temperature, respectively.
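To make the grammar-guided model generation concrete, the sketch below illustrates the core genotype-to-phenotype mapping behind Grammatical Evolution applied to temperature modelling: an integer genome selects grammar productions to build a candidate expression, which is then scored against measured traces. This is a minimal sketch under illustrative assumptions; the grammar, feature names (cpu_util, fan_speed, cool_setpoint, inlet_temp) and synthetic traces are placeholders, not the paper's actual setup.

```python
# Minimal Grammatical Evolution sketch (illustrative grammar and data,
# not the paper's actual configuration).
import random

# Each non-terminal maps to a list of candidate productions.
GRAMMAR = {
    "<expr>": ["(<expr> <op> <expr>)", "<var>", "<const>"],
    "<op>":   ["+", "-", "*"],
    "<var>":  ["cpu_util", "fan_speed", "cool_setpoint", "inlet_temp"],
    "<const>": ["0.5", "1.0", "2.0"],
}

def decode(genome, symbol="<expr>", max_depth=8):
    """Map an integer genome to an expression string by choosing
    productions with (codon mod number_of_choices)."""
    pos = 0

    def expand(sym, depth):
        nonlocal pos
        if sym not in GRAMMAR:
            return sym
        choices = GRAMMAR[sym]
        if depth >= max_depth:  # force terminal choices when too deep
            choices = [c for c in choices if "<expr>" not in c] or choices
        codon = genome[pos % len(genome)]
        pos += 1
        production = choices[codon % len(choices)]
        out, i = [], 0
        while i < len(production):
            if production[i] == "<":          # expand nested non-terminal
                j = production.index(">", i) + 1
                out.append(expand(production[i:j], depth + 1))
                i = j
            else:
                out.append(production[i])
                i += 1
        return "".join(out)

    return expand(symbol, 0)

def fitness(genome, traces):
    """Mean absolute error of the decoded model on (features, temperature) traces."""
    expr = decode(genome)
    err = 0.0
    for features, measured_temp in traces:
        try:
            predicted = eval(expr, {}, features)
        except Exception:
            return float("inf")
        err += abs(predicted - measured_temp)
    return err / len(traces)

if __name__ == "__main__":
    # Synthetic traces standing in for real data room measurements.
    traces = [({"cpu_util": u, "fan_speed": 0.6, "cool_setpoint": 24.0,
                "inlet_temp": 25.0}, 30.0 + 15.0 * u) for u in (0.2, 0.5, 0.9)]
    population = [[random.randint(0, 255) for _ in range(30)] for _ in range(50)]
    best = min(population, key=lambda g: fitness(g, traces))
    print(decode(best), fitness(best, traces))
```

In a full Grammatical Evolution run, the random population above would be evolved with selection, crossover, and mutation on the integer genomes; only the grammar-based decoding and trace-driven fitness are shown here.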