GPU underutilization is a significant concern in many production deep learning clusters, leading to prolonged job queues and increased operational expenses. A promising remedy is GPU sharing, which improves resource utilization by allowing multiple workloads to execute concurrently on a single GPU. However, deploying GPU sharing in production settings faces critical obstacles due to the limitations of existing mechanisms, including high integration costs, inadequate performance isolation, and limited application compatibility. To address these issues, we introduce \emph{Tally}, a non-intrusive GPU sharing mechanism that provides robust performance isolation and comprehensive workload compatibility. The key to Tally's robust performance isolation lies in its fine-grained, thread-block-level GPU kernel scheduling strategy, which allows the system to effectively mitigate interference caused by workload co-execution. We evaluate Tally on a diverse range of workloads and show that it incurs an average overhead of only $7.2\%$ on the $99^{th}$-percentile latency of high-priority inference tasks when they execute concurrently with best-effort training workloads, compared to the $188.9\%$ overhead exhibited by state-of-the-art GPU sharing systems such as TGS, while achieving over $80\%$ of TGS's system throughput.
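As a rough illustration of what thread-block-level kernel scheduling can look like, the following CUDA sketch slices a best-effort kernel into small groups of thread blocks so that a host-side scheduler could interleave latency-critical kernels between slices. This is a minimal, hypothetical example under our own assumptions (identifiers such as \texttt{vec\_add\_slice} and \texttt{slice\_blocks} are invented here); it is not Tally's actual implementation.
\begin{verbatim}
// Hypothetical sketch: thread-block-level scheduling via kernel slicing.
// A best-effort kernel is launched a few thread blocks at a time, so a
// host-side scheduler can insert high-priority work between slices.
// Not Tally's code; all names are invented for illustration.
#include <cuda_runtime.h>
#include <algorithm>
#include <cstdio>

// Best-effort kernel rewritten to take the logical block offset of its slice.
__global__ void vec_add_slice(const float* a, const float* b, float* c,
                              int n, int block_offset) {
    int logical_block = blockIdx.x + block_offset;   // position in full grid
    int i = logical_block * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20, threads = 256;
    const int total_blocks = (n + threads - 1) / threads;
    const int slice_blocks = 32;   // blocks per slice (tunable granularity)

    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.f; b[i] = 2.f; }

    cudaStream_t best_effort;
    cudaStreamCreate(&best_effort);

    // Issue the best-effort kernel one slice at a time.  Between slices a
    // real scheduler would consult a priority queue and launch any pending
    // high-priority inference kernels, bounding their queueing delay.
    for (int off = 0; off < total_blocks; off += slice_blocks) {
        int blocks = std::min(slice_blocks, total_blocks - off);
        vec_add_slice<<<blocks, threads, 0, best_effort>>>(a, b, c, n, off);
        // <-- scheduling decision point for high-priority work
    }
    cudaStreamSynchronize(best_effort);
    printf("c[0] = %f\n", c[0]);

    cudaFree(a); cudaFree(b); cudaFree(c);
    cudaStreamDestroy(best_effort);
    return 0;
}
\end{verbatim}
The intuition is that the smaller the slice, the sooner high-priority kernels can reach the GPU, at the cost of extra launch overhead for the best-effort workload; choosing this granularity is exactly the trade-off a thread-block-level scheduler must manage.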