Operating cloud service infrastructures requires high energy efficiency while ensuring a satisfactory service level. Motivated by data centers, we consider a workload routing and server speed control policy applicable to systems operating under fluctuating demand. Dynamic control algorithms are generally more energy-efficient than static ones; however, they often require frequent information exchange between routers and servers, which makes data center operators hesitant to deploy them. This study presents a static routing and server speed control policy that achieves energy efficiency comparable to that of a dynamic algorithm while eliminating the need for frequent communication among resources. We take a robust queueing-theoretic approach to the response time constraints that encode the quality of service (QoS) requirements. Each server is modeled as a G/G/1 processor sharing queue, and uncertainty sets define the domain of the stochastic primitives. We derive an approximate upper bound on sojourn times from the uncertainty sets and develop an approximate sojourn time quantile estimation method for QoS. Numerical experiments confirm that the proposed static policy offers competitive solutions compared with the dynamic algorithm.
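For intuition only, and not as the paper's actual derivation, the flavor of such a sojourn time bound can be sketched with the classical Kingman-type upper bound for a single G/G/1 FCFS queue (the paper instead treats processor sharing under uncertainty sets, so its bound differs): with arrival rate \(\lambda\), mean service time \(1/\mu\), utilization \(\rho = \lambda/\mu < 1\), and interarrival and service time variances \(\sigma_a^2\) and \(\sigma_s^2\),

\[
  \mathbb{E}[W] \;\le\; \frac{\lambda\,(\sigma_a^2 + \sigma_s^2)}{2\,(1-\rho)},
  \qquad
  \mathbb{E}[T] \;\le\; \mathbb{E}[W] + \frac{1}{\mu},
\]

where \(W\) is the waiting time and \(T\) the sojourn time. In a robust formulation, the moment terms would be replaced by parameters of the uncertainty sets, and \(\mu\) would depend on the chosen server speed; this sketch is illustrative only.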