Content-delivery applications can achieve scalability and reduce wide-area network traffic using geographically distributed caches. However, each deployed cache has an associated cost, and under time-varying request rates (e.g., a daily cycle) there may be long periods when the request rate from the local region is not high enough to justify this cost. Cloud computing offers a solution to problems of this kind by supporting dynamic allocation and release of resources. In this paper, we analyze the potential benefits of dynamically instantiating caches using resources from cloud service providers. We develop novel analytic caching models that accommodate time-varying request rates, transient behavior as a cache fills following instantiation, and selective cache insertion policies. Within the context of a simple cost model, we then develop bounds and compare policies with optimized parameter selections to obtain insights into key cost/performance tradeoffs. We find that dynamic cache instantiation can provide substantial cost reductions, that the potential reductions depend strongly on the object popularity skew, and that selective cache insertion can be even more beneficial in this context than with conventional edge caches. Finally, our contributions also include accurate and easy-to-compute approximations that are shown to be applicable to LRU caches under time-varying workloads.