Model autoscaling is the key mechanism for achieving serverless model-as-a-service, but it faces a fundamental trade-off between scaling speed and the storage/memory consumed to cache parameters, and it cannot meet frequent scaling requirements across multiple hosts. The key problem is that the data plane is slow: scaled instances remain stalled while parameters are loading. We first show that the data plane can be made fast with no or O(1) caching by loading parameters through the compute network between GPUs, because (1) its speed is comparable to that of the host cache and it is underutilized, and (2) scaling to multiple instances requires no or O(1) caching with network-optimized multicast. Second, autoscaling can be made live by refining the scaling abstraction from the coarse-grained instance level to the fine-grained layer level. This allows us to offload layer computation from overloaded serving instances to the scaled instances with cooperative execution, thus handling cases where the compute network is not sufficiently fast. Our system BLITZSCALE reduces serving tail latencies by up to 86% without caching, achieving performance comparable to (or even better than) an optimal setup where all parameters are cached on all hosts for autoscaling.
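Below is a minimal sketch, not BLITZSCALE's implementation, of the multicast parameter-loading idea: a serving GPU that already holds the weights broadcasts them tensor by tensor over the inter-GPU compute network, so newly scaled instances need no host-side parameter cache. It assumes PyTorch with the NCCL backend launched via `torchrun`; the `SOURCE_RANK` choice, the function name, and the toy model are hypothetical illustrations.

```python
# Hypothetical sketch of loading parameters over the GPU compute network
# via multicast (NCCL broadcast); not the BLITZSCALE API.
import os
import torch
import torch.distributed as dist

SOURCE_RANK = 0  # a serving instance that already holds the parameters

def multicast_parameters(model: torch.nn.Module) -> None:
    """Broadcast every parameter tensor from SOURCE_RANK to all other ranks.

    NCCL broadcast moves the tensors over the inter-GPU network (NVLink /
    InfiniBand), which is typically underutilized during serving, so N
    scaled instances receive the weights without any per-host cache.
    Broadcasting parameter by parameter also lets a receiver start
    executing a layer as soon as its tensors arrive, rather than waiting
    for the full model -- the intuition behind live, layer-level scaling.
    """
    for param in model.parameters():
        dist.broadcast(param.data, src=SOURCE_RANK)

if __name__ == "__main__":
    # torchrun sets RANK / WORLD_SIZE / LOCAL_RANK; each rank binds one GPU.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    # Toy stand-in for a real serving model; only SOURCE_RANK's weights
    # are meaningful, and every other rank's copy is overwritten below.
    model = torch.nn.Sequential(
        torch.nn.Linear(4096, 4096), torch.nn.Linear(4096, 4096)
    ).cuda()
    multicast_parameters(model)
    dist.destroy_process_group()
```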