Large-scale distributed training in production datacenters constitutes a challenging workload bottlenecked by network communication. In response, both major industry players (e.g., the Ultra Ethernet Consortium) and parts of academia have surprisingly, and almost unanimously, agreed that packet spraying is \emph{necessary} to improve the performance of large-scale distributed training workloads. In this paper, we challenge this prevailing belief and pose the question: \emph{How close can single-path transport come to matching the performance of packet spraying?} We demonstrate that single-path transport (from a NIC's perspective) is sufficient and can perform nearly as well as ideal packet spraying, particularly in the context of distributed training on Clos-based topologies. Our assertion is based on four key observations about workloads driven by collective communication patterns: \emph{(i)} flow sizes are known upon arrival, \emph{(ii)} flow sizes are equal within each step of a collective, \emph{(iii)} the completion time of a collective matters more than individual flow completion times, and \emph{(iv)} flows can be \emph{split} upon arrival to control load balancing directly from the application layer. We present Ethereal, a simple distributed load balancing algorithm that opportunistically splits flows and assigns a path to each flow in a transparent manner, requiring little to no modification to existing RDMA NICs. Our evaluation, spanning a wide range of collective communication algorithms and GPT models using Astra-Sim, shows that Ethereal reduces completion times by up to $30\%$ compared to packet spraying and by up to $40\%$ compared to REPS, even under link failures. This paper offers an alternative perspective for developing next-generation transport protocols tailored to large-scale distributed training.
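To make the load-balancing intuition concrete, the sketch below illustrates how observations \emph{(i)}, \emph{(ii)}, and \emph{(iv)} can be exploited: because flow sizes are known and equal on arrival, a sender can split each flow and greedily place the pieces on the least-loaded paths. This is only a minimal illustration of the idea; the function name `assign_paths`, the `max_splits` parameter, and the greedy policy are assumptions for exposition, not Ethereal's actual distributed algorithm.

```python
import heapq

def assign_paths(flows, num_paths, max_splits=2):
    """Greedily place (possibly split) flows on the least-loaded paths.

    flows: list of (flow_id, size_bytes); sizes are known on arrival.
    Returns a dict: flow_id -> list of (path_id, bytes) sub-flows.
    """
    # Min-heap of (cumulative load, path_id): always pick the least-loaded path.
    heap = [(0, p) for p in range(num_paths)]
    heapq.heapify(heap)
    assignment = {}
    for flow_id, size in flows:
        # Split each flow into up to max_splits equal chunks. A real design
        # would split only when doing so shortens the collective's completion.
        k = min(max_splits, num_paths)
        chunk = size // k
        parts = []
        for i in range(k):
            nbytes = chunk if i < k - 1 else size - chunk * (k - 1)
            load, path = heapq.heappop(heap)
            parts.append((path, nbytes))
            heapq.heappush(heap, (load + nbytes, path))
        assignment[flow_id] = parts
    return assignment
```

With four equal-size flows and four paths, every path ends up carrying the same number of bytes, which is exactly the balanced outcome that packet spraying achieves, here obtained with each sub-flow pinned to a single path.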