Cloud applications are increasingly moving away from monolithic services toward agile microservices-based deployments. However, efficient resource management for microservices poses a significant hurdle due to the sheer number of loosely coupled, interacting components. The interdependencies among microservices render existing cloud resource autoscaling techniques ineffective. Meanwhile, machine learning (ML) based approaches that try to capture the complex relationships in microservices require extensive training data and cause intentional SLO violations during data collection. Moreover, these ML-heavy approaches are slow to adapt to dynamically changing microservice operating environments. In this paper, we propose PEMA (Practical Efficient Microservice Autoscaling), a lightweight microservice resource manager that finds efficient resource allocations through opportunistic resource reduction. PEMA's lightweight design enables novel workload-aware and adaptive resource management. Using three prototype microservice implementations, we show that PEMA finds close-to-optimum resource allocations and saves up to 33% of resources compared to commercial rule-based resource allocation.