With the rapid development of technology, parallel computing applications are now commonly executed in large data centers. These applications consist of a computation phase and a communication phase, and a job completes by repeatedly executing these two phases. However, due to ever-increasing computing demands, large data centers must handle massive communication workloads. Coflow is a recently proposed networking abstraction that captures the communication patterns of data-parallel computing frameworks. This paper focuses on the coflow scheduling problem in identical parallel networks, where the goal is to minimize the makespan, i.e., the maximum completion time among all coflows. The coflow scheduling problem in large data centers is an important $NP$-hard problem. In this paper, coflows are considered in either a divisible or an indivisible setting: distinct flows of a divisible coflow may be transferred through different network cores, whereas all flows of an indivisible coflow must be transferred through the same network core. For the divisible coflow scheduling problem, this paper proposes a $(3-\tfrac{2}{m})$-approximation algorithm and a $(\tfrac{8}{3}-\tfrac{2}{3m})$-approximation algorithm, where $m$ is the number of network cores. For the indivisible coflow scheduling problem, this paper proposes a $(2m)$-approximation algorithm. Finally, we simulate our proposed algorithms and Weaver's [Huang \textit{et al.}, In 2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS), pages 1071-1081, 2020.] and compare the performance of our algorithms with that of Weaver's.
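To make the objective precise, the following is a minimal formalization of the makespan objective; the notation ($K$ coflows, a schedule $\sigma$, and $C_k(\sigma)$ for the completion time of coflow $k$ under $\sigma$) is assumed here for illustration and is not taken verbatim from the paper. The coflows are scheduled on $m$ identical network cores, and in the indivisible setting $\sigma$ additionally assigns all flows of each coflow to a single core:
\begin{equation*}
  % makespan objective: minimize the latest coflow completion time
  \min_{\sigma} \; \max_{1 \le k \le K} C_k(\sigma).
\end{equation*}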