Linear programming (LP) is an extremely useful tool that has been successfully applied to solve various problems in a wide range of areas, including operations research, engineering, economics, and even more abstract mathematical areas such as combinatorics. It is also used in many machine learning applications, such as $\ell_1$-regularized SVMs, basis pursuit, nonnegative matrix factorization, etc. Interior Point Methods (IPMs) are one of the most popular methods to solve LPs, both in theory and in practice. Their underlying complexity is dominated by the cost of solving a system of linear equations at each iteration. In this paper, we consider both feasible and infeasible IPMs for the special case where the number of variables is much larger than the number of constraints. Using tools from Randomized Linear Algebra, we present a preconditioning technique that, when combined with iterative solvers such as Conjugate Gradient or Chebyshev Iteration, provably guarantees that IPM algorithms (suitably modified to account for the error incurred by the approximate solver) converge to a feasible, approximately optimal solution, without increasing their iteration complexity. Our empirical evaluations verify our theoretical results on both real-world and synthetic data.
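To make the idea concrete, the sketch below illustrates the kind of randomized preconditioning the abstract refers to: the normal-equations matrix $AD^2A^T$ arising in an IPM iteration is preconditioned using a sketch of the wide matrix $AD$, and the resulting system is solved with Conjugate Gradient. This is a minimal illustrative sketch, not the paper's exact algorithm; the Gaussian sketching matrix, the problem sizes, and all variable names are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import solve_triangular
from scipy.sparse.linalg import cg, LinearOperator

# Illustrative sizes: n >> m (many more variables than constraints).
m, n, s = 50, 5000, 200            # s: sketch size, assumed m < s << n
rng = np.random.default_rng(0)

A = rng.standard_normal((m, n))    # constraint matrix
d = rng.uniform(0.1, 10.0, size=n) # diagonal of D (IPM-dependent scaling)
p = rng.standard_normal(m)         # right-hand side of the normal equations

AD = A * d                         # A @ diag(d), still m x n

# Randomized preconditioner: sketch the wide matrix AD with a Gaussian map W
# (E[W W^T] = I_n), then take the R factor of the sketch, so R^T R ~ A D^2 A^T.
W = rng.standard_normal((n, s)) / np.sqrt(s)
_, R = np.linalg.qr((AD @ W).T)    # R is m x m and upper triangular

def apply_normal_eq(y):
    # Matrix-vector product with A D^2 A^T, without forming it explicitly.
    return AD @ (AD.T @ y)

def apply_preconditioner(y):
    # (R^T R)^{-1} y via two triangular solves.
    z = solve_triangular(R.T, y, lower=True)
    return solve_triangular(R, z, lower=False)

M_op = LinearOperator((m, m), matvec=apply_normal_eq)
P_op = LinearOperator((m, m), matvec=apply_preconditioner)

# Preconditioned Conjugate Gradient on the normal equations A D^2 A^T dy = p.
dy, info = cg(M_op, p, M=P_op, maxiter=200)
print("CG converged" if info == 0 else f"CG info flag: {info}")
```

The design intent is that the sketched R factor captures the conditioning of $AD^2A^T$ well enough that CG converges in a small, dimension-independent number of iterations; inside an IPM, such a solve would be repeated at every iteration with an updated diagonal $D$.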