Low Rank Approximation (LRA) of an m-by-n matrix is an active research subject, fundamental for Matrix and Tensor Computations and for Big Data Mining and Analysis. Computations with LRA can be performed at sublinear cost, that is, by using far fewer than mn memory cells and arithmetic operations, but can we also compute LRA itself at sublinear cost? Yes and no. No, because the spectral, Frobenius, and all other norms of the error matrix of an LRA output by any sublinear cost deterministic or randomized algorithm exceed their minimal values for LRA by infinitely large factors for worst case inputs, and even for inputs from the small families of our Appendix. Yes, because for about two decades Cross-Approximation (C-A) iterations, running at sublinear cost, have consistently been computing close LRA worldwide. We provide new insight into this coexistence of "yes" and "no" by identifying C-A iterations as recursive sketching algorithms for LRA that use sampling test matrices and run at sublinear cost. As we prove, in good accordance with our numerical tests, already a single recursive step computes close LRA, except for a narrow class of hard inputs, which tends to shrink in the recursive process. We also discuss enhancing the power of sketching by means of leverage scores.
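To make the sublinear-cost idea concrete, the following is a minimal, illustrative sketch (not the paper's algorithm) of a single cross-approximation step on an m-by-n matrix: it reads only r sampled rows and r sampled columns of A, so it touches r(m + n) entries rather than all mn, and builds the skeleton approximation C G^+ R. The function name `cross_approximation` and the use of NumPy's pseudoinverse are assumptions for this sketch; for an exact rank-r input with a nonsingular r-by-r generator G, the step recovers A up to rounding error, while ill-conditioned G corresponds to the narrow class of hard inputs discussed above.

```python
import numpy as np

def cross_approximation(A, r, seed=None):
    """One cross-approximation (skeleton) step: approximate A by
    C @ pinv(G) @ R, reading only r sampled columns and r sampled rows
    of A -- a sublinear number of entries when r << min(m, n)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    I = rng.choice(m, size=r, replace=False)  # sampled row indices
    J = rng.choice(n, size=r, replace=False)  # sampled column indices
    C = A[:, J]           # m-by-r column sketch
    R = A[I, :]           # r-by-n row sketch
    G = A[np.ix_(I, J)]   # r-by-r generator (the "cross")
    return C @ np.linalg.pinv(G) @ R

# Demo on a random matrix of exact rank 5: a single step suffices here.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 150))
B = cross_approximation(A, r=5, seed=1)
err = np.linalg.norm(A - B) / np.linalg.norm(A)
```

In this rank-5 example a random 5-by-5 generator G is nonsingular almost surely, so the relative Frobenius error `err` is near machine precision; recursive steps (re-sampling rows and columns guided by the previous cross) are what the abstract identifies with recursive sketching.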