Abstract
Linear programming (LP) is used in many machine learning applications, such as ℓ1-regularized SVMs, basis pursuit, nonnegative matrix factorization, etc. Interior Point Methods (IPMs) are among the most popular methods for solving LPs, both in theory and in practice. Their underlying complexity is dominated by the cost of solving a system of linear equations at each iteration. In this paper, we consider infeasible IPMs for the special case where the number of variables is much larger than the number of constraints (i.e., the LP is wide) or, by taking the dual, vice versa (i.e., the LP is tall). Using tools from Randomized Linear Algebra, we present a preconditioning technique that, when combined with the Conjugate Gradient iterative solver, provably guarantees that infeasible IPM algorithms (suitably modified to account for the error incurred by the approximate solver) converge to a feasible, approximately optimal solution, without increasing their iteration complexity. Our empirical evaluations verify our theoretical results on both real and synthetic data.
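In standard IPMs, the linear system mentioned above takes the form of the normal equations (A D^2 A^T) Δy = p, where D is a positive diagonal scaling that changes at every iteration. The following is a simplified, illustrative Python sketch of the general recipe the abstract describes (randomized preconditioning combined with Conjugate Gradient). The function name `sketch_precond_cg`, the Gaussian sketching matrix, and the sketch size `sketch_factor * m` are assumptions made here for illustration, not the paper's exact construction or analysis.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def sketch_precond_cg(A, d, p, sketch_factor=4, rtol=1e-10):
    """Illustrative sketch: solve the IPM normal equations
    (A D^2 A^T) y = p with preconditioned CG, where the
    preconditioner comes from a randomized sketch of A D.

    A : (m, n) constraint matrix of a wide LP (n >> m)
    d : (n,) positive diagonal of D (e.g., sqrt(x / s) in an IPM)
    p : (m,) right-hand side of the normal equations
    """
    m, n = A.shape
    AD = A * d  # A @ diag(d), scaling each column of A

    # Sketch AD on the right down to w = O(m) columns.
    # A Gaussian sketch is used here purely for illustration.
    w = sketch_factor * m
    Omega = np.random.randn(n, w) / np.sqrt(w)
    B = AD @ Omega  # (m, w), a randomized compression of AD

    # Thin SVD of the sketch: B B^T = U diag(sigma^2) U^T approximates
    # A D^2 A^T, so M^{-1} = U diag(sigma^-2) U^T is the preconditioner.
    U, sigma, _ = np.linalg.svd(B, full_matrices=False)

    normal_eq = LinearOperator((m, m), matvec=lambda y: AD @ (AD.T @ y))
    precond = LinearOperator(
        (m, m), matvec=lambda r: U @ ((U.T @ r) / sigma**2)
    )

    # `rtol` requires SciPy >= 1.12 (older releases call it `tol`).
    y, info = cg(normal_eq, p, M=precond, rtol=rtol)
    return y, info  # info == 0 indicates CG converged
```

Because the preconditioned system is well conditioned with high probability, CG converges in few iterations, each costing one matrix-vector product with A D and its transpose; for a tall LP (m >> n), the same construction applies to the dual, as the abstract notes.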
Original language | English |
---|---|
Journal | Advances in Neural Information Processing Systems |
Volume | 2020-December |
State | Published - 2020 |
Externally published | Yes |
Event | 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online; Duration: Dec 6 2020 → Dec 12 2020 |
Funding
Acknowledgements: We thank the anonymous reviewers for their helpful comments. AC and PD were partially supported by NSF FRG 1760353 and NSF CCF-BSF 1814041. HA was partially supported by BSF grant 2017698. PL was supported by an Amazon Graduate Fellowship in Artificial Intelligence.
Funders | Funder number |
---|---|
Amazon Graduate Fellowship in Artificial Intelligence | |
National Science Foundation | FRG 1760353, CCF-BSF 1814041 |
United States-Israel Binational Science Foundation | 2017698 |