Algorithms for numerical tasks in finite precision simultaneously seek to minimize the number of floating point operations performed and the number of bits of precision required by each operation. This paper presents an algorithm for Hermitian diagonalization requiring only $\lg(1/\varepsilon)+O(\log(n)+\log\log(1/\varepsilon))$ bits of precision, where $n$ is the size of the input matrix and $\varepsilon$ is the target error. Furthermore, it runs in near matrix multiplication time.

In the general setting, the first complete stability analysis of a near matrix multiplication time algorithm for diagonalization is that of Banks et al. [BGVKS20], who exhibit an algorithm for diagonalizing an arbitrary matrix up to backward error $\varepsilon$ using only $O(\log^4(n/\varepsilon)\log(n))$ bits of precision. This work focuses on the Hermitian setting, where we establish a dramatically improved bound on the number of bits needed; in particular, the result comes close to a practical bound. The exact bit count depends on the specific implementations of matrix multiplication and QR decomposition one wishes to use, but with suitable $O(n^3)$-time implementations, for $\varepsilon=10^{-15}$ and $n=4000$ we show that 92 bits of precision suffice (and that 59 are necessary). By comparison, for the same parameters the analysis of [BGVKS20] does not even show that 682,916,525,000 bits suffice.
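As a rough sanity check on the headline bound, one can evaluate each term of $\lg(1/\varepsilon)+O(\log(n)+\log\log(1/\varepsilon))$ at the example parameters above. The sketch below is a hypothetical back-of-the-envelope computation, not part of the paper; the constants hidden in the $O(\cdot)$ are not specified in this abstract, so it recovers only the leading term, not the exact 92-bit and 59-bit figures.

```python
import math

# Back-of-the-envelope evaluation of the precision bound
#   lg(1/eps) + O(log(n) + log log(1/eps))
# at the abstract's example parameters. The O(.) constants are not
# given here, so this recovers only the leading term.
eps = 1e-15
n = 4000

leading = math.log2(1 / eps)             # lg(1/eps)         ~ 49.8 bits
log_n = math.log2(n)                     # log2(n)           ~ 12.0 bits
loglog = math.log2(math.log2(1 / eps))   # log2(log2(1/eps)) ~  5.6 bits

print(f"lg(1/eps)         = {leading:.1f} bits")
print(f"log2(n)           = {log_n:.1f} bits")
print(f"log2(log2(1/eps)) = {loglog:.1f} bits")
# The ~50-bit leading term dominates, which is consistent with 59 bits
# being necessary and 92 sufficing once the O(.) constants are included.
```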