The computation of a matrix function $f(A)$ is an important task in scientific computing, appearing in machine learning, network analysis and the solution of partial differential equations. In this work, we use only matrix-vector products $x\mapsto Ax$ to approximate functions of sparse matrices and matrices with related structure: sparse matrices $A$ themselves, and matrices whose entries exhibit a decay property similar to that of matrix functions. We show that when $A$ is a sparse matrix with an unknown sparsity pattern, techniques from compressed sensing can be applied under natural assumptions. Moreover, if $A$ is a banded matrix, then certain deterministic matrix-vector products can efficiently recover the large entries of $f(A)$. We describe an algorithm for each of the two cases and provide an error analysis based on decay bounds for the entries of $f(A)$. We conclude with numerical experiments demonstrating the accuracy of our algorithms.
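The deterministic recovery of a banded matrix from a small number of matrix-vector products can be illustrated with the classical probing idea: probe with sums of unit vectors spaced so that the nonzero bands of distinct columns never overlap. The sketch below is an illustration under that assumption, not the paper's algorithm; the names `probe_banded` and `matvec` and the stride choice $2b+1$ are ours.

```python
import numpy as np

def probe_banded(matvec, n, bandwidth):
    """Recover an n x n matrix with the given bandwidth from 2*bandwidth + 1
    matrix-vector products (classical probing; illustrative sketch only)."""
    stride = 2 * bandwidth + 1
    B = np.zeros((n, n))
    for r in range(stride):
        # Probing vector: sum of unit vectors e_r, e_{r+stride}, e_{r+2*stride}, ...
        v = np.zeros(n)
        v[r::stride] = 1.0
        y = matvec(v)  # one matrix-vector product
        for j in range(r, n, stride):
            # Within rows |i - j| <= bandwidth, only column j contributes,
            # since probed columns are spaced 2*bandwidth + 1 apart.
            lo, hi = max(0, j - bandwidth), min(n, j + bandwidth + 1)
            B[lo:hi, j] = y[lo:hi]
    return B

# Example: recover a tridiagonal matrix (bandwidth 1) from 3 products.
n = 8
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))
B = probe_banded(lambda x: A @ x, n, bandwidth=1)
assert np.allclose(A, B)
```

When the target is $f(A)$ rather than $A$, the entries outside a band are not exactly zero but only small, so a scheme of this kind recovers the large entries up to an error governed by the decay of $f(A)$ away from the diagonal.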