As cloud applications shift from monoliths to loosely coupled microservices, application developers must decide how many compute resources (e.g., number of replicated containers) to assign to each microservice within an application. This decision affects both (1) the dollar cost to the application developer and (2) the end-to-end latency perceived by the application user. Today, individual microservices are autoscaled independently by adding VMs whenever per-microservice CPU or memory utilization crosses a configurable threshold. However, an application user's end-to-end latency consists of time spent on multiple microservices, and each microservice might need a different number of VMs to meet an overall end-to-end latency target. We present COLA, an autoscaler for microservice-based applications that collectively allocates VMs to microservices with a global goal of minimizing dollar cost while keeping end-to-end application latency under a given target. Using 5 open-source applications, we compare COLA to several utilization-based and machine-learning-based autoscalers. We evaluate COLA across two compute settings on Google Kubernetes Engine (GKE): GKE Standard, in which users manage compute resources, and GKE Autopilot, a new mode of operation in which the cloud provider manages compute infrastructure. COLA meets a desired median or tail latency target on 53 of 63 workloads, where it provides a cost reduction of 19.3%, on average, over the next cheapest autoscaler. COLA is the most cost-effective autoscaling policy for 48 of these 53 workloads. The cost savings from managing a cluster with COLA allow COLA to pay for its training cost in a few days. On smaller applications, for which we can exhaustively search microservice configurations, we find that COLA is optimal in 90% of cases and near-optimal otherwise.
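To make the baseline concrete, the per-microservice threshold autoscaling described above can be sketched as follows. This is a minimal illustration of the general technique, not COLA's method or GKE's actual autoscaler; the service names, thresholds, and utilization figures are illustrative assumptions.

```python
def threshold_autoscale(replicas: int, cpu_util: float,
                        scale_up_at: float = 0.8,
                        scale_down_at: float = 0.3) -> int:
    """Return the new replica count for one microservice, scaled in isolation.

    Thresholds are configurable assumptions; real autoscalers typically also
    apply cooldown periods and min/max replica bounds.
    """
    if cpu_util > scale_up_at:
        return replicas + 1          # add a VM/container when overloaded
    if cpu_util < scale_down_at and replicas > 1:
        return replicas - 1          # shed capacity when underutilized
    return replicas

# Each microservice is scaled independently, with no view of the
# end-to-end latency the user actually experiences:
fleet = {"frontend": 2, "cart": 1, "checkout": 3}
utilization = {"frontend": 0.9, "cart": 0.2, "checkout": 0.5}
fleet = {svc: threshold_autoscale(n, utilization[svc])
         for svc, n in fleet.items()}
```

The limitation the abstract points out is visible here: each call sees only one microservice's utilization, so no component reasons about how replica counts across services jointly determine end-to-end latency or total cost.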