Training Deep Neural Networks (DNNs) is a popular workload in both enterprises and cloud data centers. Existing schedulers for DNN training treat the GPU as the dominant resource and allocate other resources, such as CPU and memory, in proportion to the number of GPUs requested by the job. Unfortunately, these schedulers ignore a job's sensitivity to the allocation of CPU, memory, and storage resources. In this work, we propose Synergy, a resource-sensitive scheduler for shared GPU clusters. Synergy infers the sensitivity of DNNs to different resources using optimistic profiling: some jobs benefit from more than the GPU-proportional allocation, while others are unaffected by less than the GPU-proportional allocation. Synergy performs such multi-resource, workload-aware assignments across a set of jobs scheduled on shared multi-tenant clusters using a new near-optimal online algorithm. Our experiments show that workload-aware CPU and memory allocation can improve average job completion time (JCT) by up to 3.4x compared to traditional GPU-proportional scheduling.
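To make the contrast concrete, the sketch below illustrates the difference between GPU-proportional allocation and a workload-aware allocation of the kind the abstract describes. This is a minimal illustration, not Synergy's actual algorithm: all names (`Job`, `cpu_demand`, the greedy redistribution) are hypothetical, and the real system infers per-resource sensitivity via optimistic profiling and packs jobs with a near-optimal online algorithm.

```python
# Hypothetical sketch: GPU-proportional vs. workload-aware CPU allocation.
# Not Synergy's real algorithm; field names and the greedy pass are assumptions.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    gpus: int          # GPUs requested by the job
    cpu_demand: float  # CPUs beyond which throughput stops improving
                       # (as profiling might infer; hypothetical field)

def proportional_alloc(jobs, total_gpus, total_cpus):
    """Baseline: every job gets CPUs proportional to its GPU share."""
    cpus_per_gpu = total_cpus / total_gpus
    return {j.name: j.gpus * cpus_per_gpu for j in jobs}

def workload_aware_alloc(jobs, total_gpus, total_cpus):
    """Cap CPU-insensitive jobs at their profiled demand, then hand the
    freed CPUs to jobs that benefit from more than their fair share."""
    cpus_per_gpu = total_cpus / total_gpus
    alloc = {j.name: min(j.gpus * cpus_per_gpu, j.cpu_demand) for j in jobs}
    spare = total_cpus - sum(alloc.values())
    # Greedily give spare CPUs to the jobs with the largest unmet demand.
    for j in sorted(jobs, key=lambda j: j.cpu_demand - alloc[j.name],
                    reverse=True):
        extra = min(spare, j.cpu_demand - alloc[j.name])
        alloc[j.name] += extra
        spare -= extra
    return alloc

if __name__ == "__main__":
    jobs = [Job("image-cls", gpus=4, cpu_demand=24),      # CPU-hungry input pipeline
            Job("language-model", gpus=4, cpu_demand=8)]  # GPU-bound, needs few CPUs
    print(proportional_alloc(jobs, total_gpus=8, total_cpus=32))
    # {'image-cls': 16.0, 'language-model': 16.0}  -- image-cls is starved
    print(workload_aware_alloc(jobs, total_gpus=8, total_cpus=32))
    # {'image-cls': 24.0, 'language-model': 8.0}   -- CPUs follow sensitivity
```

In this toy example, the GPU-bound job loses nothing when capped below its GPU-proportional CPU share, while the CPU-sensitive job gains the surplus; this reallocation is what drives the JCT improvements the abstract reports.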