Recent technological developments have led to big data processing, which entails significant computational difficulties when solving large-scale linear systems or inverting matrices. As a result, fast approximate iterative matrix inversion methods accelerated on Graphics Processing Units (GPUs) have been a subject of extensive research, seeking solutions where classical direct inversion is too expensive to conduct. Commonly used methods include the Neumann Series (NS), Newton Iteration (NI), Chebyshev Iteration (CI), and Successive Over-Relaxation, to cite a few. In this work, we develop a new iterative algorithm based on the NS, which we name 'Nested Neumann' (NN). This new methodology generalizes higher orders of the NI (or CI) by taking advantage of a computationally free iterative update of the preconditioning matrix as a function of a given 'inception depth'. We demonstrate mathematically that the NN: (i) converges provided the preconditioner satisfies the spectral norm condition of the NS, (ii) has a rate of convergence of order equal to the inception depth plus one, and (iii) has an optimal inception depth of one or, preferably, two, depending on RAM constraints. Furthermore, we derive an explicit formula for the NN that is applicable to massive sparse matrices, at the price of an increase in computational cost. Importantly, the NN establishes an analytic equivalence between the NS and the NN (NI, CI, and higher orders), which is of importance for massive MIMO (mMIMO) systems. Finally, the NN method is applicable to positive semi-definite matrices for matrix inversion, and to any linear system (sparse, non-sparse, complex, etc.).
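To make the idea concrete, the following is a minimal NumPy sketch of a higher-order Neumann/Newton-type inverse iteration of the kind the abstract describes: each step applies a truncated Neumann series of length (depth + 1) around the current iterate, which yields order-(depth + 1) convergence when the initial residual satisfies the usual spectral norm condition ||I - X0 A|| < 1. The function name, the initial-guess scaling, and the test matrix are illustrative assumptions, not the paper's exact algorithm; depth = 1 reduces to the classical Newton iteration X <- X(2I - AX).

```python
import numpy as np

def nested_neumann_inverse(A, X0, depth=2, iters=8):
    """Illustrative higher-order inverse iteration (not the paper's exact NN).

    Each step forms the residual R = I - A X and multiplies the current
    iterate by the truncated Neumann series I + R + ... + R**depth,
    giving convergence of order depth + 1 when ||I - A X0|| < 1.
    """
    n = A.shape[0]
    I = np.eye(n)
    X = X0
    for _ in range(iters):
        R = I - A @ X          # residual of the current approximate inverse
        S = I.copy()           # accumulates I + R + R^2 + ... + R^depth
        P = I.copy()
        for _ in range(depth):
            P = P @ R
            S = S + P
        X = X @ S              # depth=1 recovers Newton: X(2I - AX)
    return X

# Example on a diagonally dominant matrix, with a scaled-identity
# initial guess chosen (by assumption) so that ||I - A X0|| < 1.
rng = np.random.default_rng(0)
A = np.diag(np.full(50, 10.0)) + 0.1 * rng.standard_normal((50, 50))
X0 = np.eye(50) * (50.0 / np.trace(A))
X = nested_neumann_inverse(A, X0, depth=2, iters=8)
print(np.linalg.norm(A @ X - np.eye(50)))
```

With depth = 2 the residual norm is cubed at every step, so a handful of iterations drives it to machine precision on this well-conditioned example.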