General Matrix Multiplication (GEMM) kernels take center stage in high-performance computing and machine learning. Recent NVIDIA GPUs include GEMM accelerators, such as NVIDIA's Tensor Cores. Exploiting them is hampered by the two-language problem: it requires either low-level programming, which implies low programmer productivity, or the use of libraries that offer only a limited set of components. Because rephrasing algorithms in terms of established components often introduces overhead, the libraries' lack of flexibility limits the freedom to explore new algorithms. Researchers using GEMMs can hence not enjoy programming productivity, high performance, and research flexibility at once. In this paper we solve this problem. We present three sets of abstractions and interfaces to program GEMMs within the scientific Julia programming language. The interfaces and abstractions are co-designed for researchers' needs and Julia's features to achieve sufficient separation of concerns and the flexibility to easily extend basic GEMMs in many different ways without paying a performance price. Comparing our GEMMs to the state-of-the-art libraries cuBLAS and CUTLASS, we demonstrate that our performance is in most cases on par with, and in some cases even exceeds, that of the libraries, without having to write a single line of code in CUDA C++ or assembly, and without facing flexibility limitations.