GPU-enhanced architectures are now dominant in HPC systems, but MPI message-passing communication involving GPUs has proven to be both complex and expensive, motivating new approaches that lower such costs. We compare and contrast stream/graph- and kernel-triggered MPI communication abstractions, whose principal purpose is to enhance communication performance when GPU kernels create or consume data to be transferred through MPI operations. Researchers and practitioners have proposed multiple candidate APIs for stream and/or kernel triggering that span various GPU architectures and approaches, including MPI-4 partitioned point-to-point communication, stream communicators, and explicit MPI stream/queue objects. Some of these designs strengthen or weaken the semantics of MPI operations, and those that break backward compatibility with MPI are duly noted. A key contribution of this paper is to promote community convergence toward a stream- and/or kernel-triggering abstraction by highlighting the common and differing goals and contributions of existing abstractions. We describe the design space in which these abstractions reside, their implicit or explicit use of streams and other non-MPI abstractions, and their relationship to partitioned and persistent operations, and we discuss their potential for added performance, their usability, and where functional and/or semantic gaps exist. Finally, we provide a taxonomy of stream- and kernel-triggered abstractions, including disambiguation of similar semantic terms, and consider directions for future standardization in MPI-5.
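As a point of reference for this comparison, the sketch below shows host-side use of MPI-4 partitioned point-to-point communication, one of the abstractions surveyed. It is illustrative only: the partition count, message size, and two-rank layout are assumptions, and kernel-triggered proposals would issue the readiness notification (MPI_Pready) from GPU code rather than from the host as done here.

/* Host-side sketch of MPI-4 partitioned point-to-point communication.
 * Assumed layout: rank 0 sends a buffer to rank 1 in 8 partitions. */
#include <mpi.h>

#define PARTITIONS 8
#define COUNT_PER_PARTITION 1024

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double buf[PARTITIONS * COUNT_PER_PARTITION];
    MPI_Request req;

    if (rank == 0) {
        /* Persistent, partitioned send: the message is exposed as
         * PARTITIONS independently completable pieces. */
        MPI_Psend_init(buf, PARTITIONS, COUNT_PER_PARTITION, MPI_DOUBLE,
                       1, 0, MPI_COMM_WORLD, MPI_INFO_NULL, &req);
        MPI_Start(&req);
        for (int p = 0; p < PARTITIONS; p++) {
            /* In a GPU setting, a kernel would fill partition p and then
             * trigger its transfer; here the host marks it ready. */
            MPI_Pready(p, req);
        }
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        MPI_Request_free(&req);
    } else if (rank == 1) {
        /* Matching partitioned receive completes once all partitions arrive. */
        MPI_Precv_init(buf, PARTITIONS, COUNT_PER_PARTITION, MPI_DOUBLE,
                       0, 0, MPI_COMM_WORLD, MPI_INFO_NULL, &req);
        MPI_Start(&req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        MPI_Request_free(&req);
    }

    MPI_Finalize();
    return 0;
}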