Hardware heterogeneity is here to stay in high-performance computing. Large-scale systems are currently equipped with multiple GPU accelerators per compute node and are expected to incorporate more specialized hardware. This shift in the computing ecosystem offers many opportunities for performance improvement; however, it also increases the complexity of programming such architectures. This work introduces a runtime framework that enables effortless programming for heterogeneous systems while efficiently utilizing hardware resources. The framework is integrated within a distributed and scalable runtime system to facilitate performance portability across heterogeneous nodes. Along with the design, this paper describes the implementation and the optimizations performed, achieving up to a 300% improvement on a single device and linear scalability on a node equipped with four GPUs. In a distributed-memory environment, the framework offers portable abstractions that enable efficient inter-node communication among devices with varying capabilities. It outperforms MPI+CUDA by up to 20% for large messages while keeping the overhead for small messages within 10%. Furthermore, the results of our performance evaluation with a distributed Jacobi proxy application demonstrate that our software imposes minimal overhead and achieves a performance improvement of up to 40%. This is accomplished through the optimizations mentioned earlier, as well as through over-decomposition implemented in a manner that ensures performance portability.