Hardware heterogeneity is here to stay for high-performance computing. Large-scale systems are currently equipped with multiple GPU accelerators per compute node and are expected to incorporate more specialized hardware. This shift in the computing ecosystem offers many opportunities for performance improvement; however, it also increases the complexity of programming for such architectures. This work introduces a runtime framework that enables effortless programming for heterogeneous systems while efficiently utilizing hardware resources. The framework is integrated within a distributed and scalable runtime system to facilitate performance portability across heterogeneous nodes. Along with the design, this paper describes the implementation and optimizations performed, achieving up to 300% improvement on a single device and linear scalability on a node equipped with four GPUs. In a distributed memory environment, the framework offers portable abstractions that enable efficient inter-node communication among devices with varying capabilities. It outperforms MPI+CUDA by up to 20% for large messages while keeping the overhead for small messages within 10%. Furthermore, the results of our performance evaluation on a distributed Jacobi proxy application demonstrate that our software imposes minimal overhead and achieves a performance improvement of up to 40%. This is accomplished through optimizations at the library level as well as by creating opportunities to leverage application-specific optimizations such as over-decomposition.