Positive semi-definite matrices commonly occur as normal matrices of least squares problems in statistics or as kernel matrices in machine learning and approximation theory. They are typically large and dense. Thus algorithms to solve systems with such a matrix can be very costly. A core idea to reduce computational complexity is to approximate the matrix by one of low rank. The optimal and well-understood choice is based on the eigenvalue decomposition of the matrix. Unfortunately, this is computationally very expensive. Cheaper methods are based on Gaussian elimination, but they require pivoting. We will show how invariant matrix theory provides explicit error formulas for an averaged error based on volume sampling. The formula leads to ratios of elementary symmetric polynomials in the eigenvalues. We discuss some new and old bounds and include several examples where an expected error norm can be computed exactly.
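As an illustration of the kind of ratio meant here, a hedged sketch in standard notation (the symbols $A$, $\lambda_i$, $S$, and $\widehat{A}_S$ are assumptions for this sketch, not taken from the text): if $A$ is positive semi-definite with eigenvalues $\lambda_1,\dots,\lambda_n$ and a pivot set $S$ of size $k$ is drawn by volume sampling, i.e. with probability proportional to $\det(A_{S,S})$, then the expected trace error of the resulting Schur-complement (Nyström) approximation $\widehat{A}_S$ is commonly stated as
\[
  \mathbb{E}\,\operatorname{tr}\bigl(A - \widehat{A}_S\bigr)
  \;=\; (k+1)\,\frac{e_{k+1}(\lambda_1,\dots,\lambda_n)}{e_k(\lambda_1,\dots,\lambda_n)},
\]
where $e_j$ denotes the $j$-th elementary symmetric polynomial; ratios of this form are the quantities analyzed below.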