With constrained resources, deciding what, where, and how to cache at the edge is one of the key challenges for edge computing systems. The cached items include not only application data contents but also local copies of the edge services that handle incoming requests. However, current systems treat contents and services separately, without considering the latency interplay between caching and queueing. In this paper, we therefore propose a novel class of stochastic models that enables jointly optimizing content caching and service placement decisions. We first explain how to apply layered queueing network (LQN) models to edge service placement and show that combining them with genetic algorithms yields more accurate resource allocation than an established baseline. Next, we extend LQNs with caching components to establish a joint modeling method for content caching and service placement (JCSP) and present analytical methods to solve the resulting model. Finally, we simulate real-world Azure traces to evaluate the JCSP method and find that JCSP achieves up to a 35% improvement in response time and a 500 MB reduction in memory usage compared to baseline heuristics for edge caching resource allocation.