The Tensor Core is a mixed-precision matrix-matrix multiplication unit on NVIDIA GPUs with a theoretical peak performance of more than 300 TFlop/s on the Ampere architecture. Tensor Cores were developed in response to the high demand for dense matrix multiplication in machine learning. However, many applications in scientific computing, such as preconditioners for iterative solvers and low-precision Fourier transforms, can also exploit Tensor Cores. To compute a matrix multiplication on Tensor Cores, the input matrices must be converted to half precision, which results in a loss of accuracy. To mitigate this, the mantissa bits lost in the conversion can be kept in additional half-precision variables and used to correct the result of the matrix-matrix multiplication. Even with this correction, Tensor Cores still yield higher throughput than FP32 SIMT Cores. Nevertheless, the correction capability of this method alone is limited, and the resulting accuracy does not match that of matrix multiplication on FP32 SIMT Cores. We address this problem and develop a high-accuracy, high-performance, and low-power matrix-matrix multiplication implementation on Tensor Cores that exactly matches the accuracy of FP32 SIMT Cores while achieving higher throughput. The implementation is based on NVIDIA's CUTLASS. We find that the keys to achieving this accuracy are how the rounding inside Tensor Cores is handled and how the probability of underflow during the correction computation is reduced. Our implementation achieves 51 TFlop/s for a limited exponent range using FP16 Tensor Cores and 33 TFlop/s for the full exponent range of FP32 using TF32 Tensor Cores on an NVIDIA A100 GPU, both exceeding the theoretical FP32 SIMT Core peak performance of 19.5 TFlop/s.
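To make the splitting-and-correction idea concrete, the following is a minimal, element-wise sketch of how an FP32 value can be split into a high FP16 part and a scaled FP16 residual. The function name `split_fp32_to_fp16`, the scale factor 2^11, and the accumulation order below are illustrative assumptions, not the exact implementation described in this work.

```cuda
// Minimal sketch (an assumption, not this paper's implementation) of splitting an
// FP32 value into two FP16 values for error-corrected Tensor Core GEMM.
// Element-wise: a ~= a_hi + a_lo / 2^11, where a_lo holds the residual of the
// FP16 rounding, scaled up by 2^11 so it is less likely to underflow in FP16.
#include <cuda_fp16.h>

__host__ __device__ inline void split_fp32_to_fp16(float a, __half &a_hi, __half &a_lo) {
    const float scale = 2048.0f;                    // 2^11 (assumed scaling factor)
    a_hi = __float2half(a);                         // high part: plain FP16 rounding
    const float residual = a - __half2float(a_hi);  // bits discarded by that rounding
    a_lo = __float2half(residual * scale);          // keep the residual, scaled to
                                                    // reduce the chance of underflow
}
```

With such a split, a product of FP32 matrices A and B can be approximated on Tensor Cores as A_hi B_hi + (A_hi B_lo + A_lo B_hi) / 2^11, accumulated in FP32; the A_lo B_lo term is typically dropped because its contribution lies below single-precision accuracy.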