Sparse matrices play a significant role in scientific computing, including fields related to artificial intelligence. Studies of sparse matrix--vector multiplication (SpMV) have shown that Krylov subspace methods are particularly sensitive to the round-off errors of floating-point arithmetic. Multiple-precision linear computation can stabilize convergence by reducing these round-off errors. In this paper, we present the performance of our SpMV implementation accelerated with SIMD instructions and demonstrate its effectiveness on various examples, including Krylov subspace methods.
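To make the kernel under discussion concrete, the following is a minimal sketch of a SIMD-vectorized SpMV in the common CSR (compressed sparse row) format, using AVX2 gather and FMA intrinsics in double precision. It is only an illustrative baseline, not the authors' multiple-precision implementation; the function and variable names (csr_spmv_avx2, row_ptr, col_idx, val) are hypothetical, and it assumes an x86-64 target compiled with -mavx2 -mfma.

#include <immintrin.h>
#include <stdio.h>

/* Illustrative CSR SpMV sketch (double precision) with an AVX2/FMA
   inner loop. Not the paper's implementation; names are hypothetical. */
static void csr_spmv_avx2(int n, const int *row_ptr, const int *col_idx,
                          const double *val, const double *x, double *y)
{
    for (int i = 0; i < n; i++) {
        __m256d acc = _mm256_setzero_pd();
        int j = row_ptr[i];
        /* Process 4 nonzeros per iteration: contiguous matrix values,
           gathered x entries, fused multiply-add accumulation. */
        for (; j + 4 <= row_ptr[i + 1]; j += 4) {
            __m256d v   = _mm256_loadu_pd(&val[j]);
            __m128i idx = _mm_loadu_si128((const __m128i *)&col_idx[j]);
            __m256d xv  = _mm256_i32gather_pd(x, idx, 8); /* scale 8 = sizeof(double) */
            acc = _mm256_fmadd_pd(v, xv, acc);
        }
        /* Horizontal reduction of the 4 partial sums. */
        double tmp[4];
        _mm256_storeu_pd(tmp, acc);
        double sum = tmp[0] + tmp[1] + tmp[2] + tmp[3];
        /* Scalar tail for rows whose nonzero count is not a multiple of 4. */
        for (; j < row_ptr[i + 1]; j++)
            sum += val[j] * x[col_idx[j]];
        y[i] = sum;
    }
}

int main(void)
{
    /* 3x3 example: [[4,0,1],[0,3,0],[2,0,5]] times x = [1,2,3]. */
    int    row_ptr[] = {0, 2, 3, 5};
    int    col_idx[] = {0, 2, 1, 0, 2};
    double val[]     = {4, 1, 3, 2, 5};
    double x[] = {1, 2, 3}, y[3];
    csr_spmv_avx2(3, row_ptr, col_idx, val, x, y);
    printf("%g %g %g\n", y[0], y[1], y[2]); /* expected: 7 6 17 */
    return 0;
}

The gather instruction handles the indirect access x[col_idx[j]] that makes SpMV hard to vectorize; a multiple-precision variant would replace the double accumulators with higher-precision arithmetic to reduce the round-off errors discussed above.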