As parallelism at the chip and server level increases, massively parallel systems are encountering challenges that arose with network-level systems a decade ago. These systems have become an important workhorse for machine learning as well as graph and sparse workloads. To tackle the resulting communication bottlenecks, recent works have introduced task-based parallelization schemes to accelerate graph search and sparse data-structure traversal, with some solutions scaling up to thousands of processing units (PUs) on a single chip. However, existing communication schemes do not scale beyond thousands of PUs. To address this challenge, we propose Tascade, a system that provides hardware-supported, efficient, and balanced reduction trees to reduce communication overhead in task-based parallelization schemes, scaling up to a million PUs. Tascade achieves this with an execution model based on proxy regions and cascading updates, together with a supporting hardware design that executes the reduction tree at the chip level. The Tascade approach reduces overall communication and improves load balancing. We evaluate six applications across four datasets to provide a detailed analysis of Tascade's performance, power, and traffic-reduction gains over prior work. Our parallelization of Breadth-First Search with RMAT-26 across a million PUs, the largest reported in the literature, reaches 5,305 GTEPS.