Cloud applications are increasingly moving away from monolithic services toward agile microservices-based deployments. However, efficient resource management for microservices poses a significant hurdle due to the sheer number of loosely coupled, interacting components. The interdependencies between various microservices render existing cloud resource autoscaling techniques ineffective. Meanwhile, machine learning (ML) based approaches that try to capture the complex relationships in microservices require extensive training data and cause intentional SLO violations during training. Moreover, these ML-heavy approaches are slow to adapt to dynamically changing microservice operating environments. In this paper, we propose PEMA (Practical Efficient Microservice Autoscaling), a lightweight microservice resource manager that finds efficient resource allocations through opportunistic resource reduction. PEMA's lightweight design enables novel workload-aware and adaptive resource management. Using three prototype microservice implementations, we show that PEMA finds efficient resource allocations and saves up to 33% of resources compared to commercial rule-based resource allocation.