Provable accelerated gradient method for nonconvex low rank optimization
DOI: 10.1007/s10994-019-05819-w
zbMATH: 1446.90138
arXiv: 1702.04959
OpenAlex: W2963459475
Wikidata: Q127616359 (Scholia: Q127616359)
MaRDI QID: Q2303662
Publication date: 4 March 2020
Published in: Machine Learning
Full work available at URL: https://arxiv.org/abs/1702.04959
MSC classification: Large-scale problems in mathematical programming (90C06); Nonconvex programming, global optimization (90C26); Combinatorial optimization (90C27)
Related Items
- CMD: controllable matrix decomposition with global optimization for deep neural network compression
- Accelerated alternating direction method of multipliers: an optimal \(O(1/K)\) nonergodic analysis
Cites Work
- Matrix Completion and Low-Rank SVD via Fast Alternating Least Squares
- Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Minimizing finite sums with the stochastic average gradient
- Estimation of high-dimensional low-rank matrices
- Estimation of (near) low-rank matrices with noise and high-dimensional scaling
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- Reduced rank vector generalized linear models for feature extraction
- A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization
- Introductory lectures on convex optimization. A basic course.
- A globally convergent algorithm for nonconvex optimization based on block coordinate update
- Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm
- Linear convergence of first order methods for non-strongly convex optimization
- Sparse PCA: optimal rates and adaptive estimation
- Local minima and convergence in low-rank semidefinite programming
- Exact matrix completion via convex optimization
- Reduced-rank vector generalized linear models
- Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
- Eigenvalues and Condition Numbers of Random Matrices
- New Perturbation Bounds for the Unitary Polar Factor
- Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
- Accelerated Methods for NonConvex Optimization
- Low-Rank Matrix Completion in the Presence of High Coherence
- Global Optimality in Low-Rank Matrix Optimization
- Finding approximate local minima faster than gradient descent
- Katyusha: the first direct acceleration of stochastic gradient methods
- Deterministic Guarantees for Burer‐Monteiro Factorizations of Smooth Semidefinite Programs
- 1-Bit matrix completion
- Symmetry, Saddle Points, and Global Optimization Landscape of Nonconvex Matrix Factorization
- The non-convex geometry of low-rank matrix optimization
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Restricted strong convexity and weighted matrix completion: Optimal bounds with noise
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- Faster Subset Selection for Matrices and Applications
- Low-rank matrix completion using alternating minimization
- Optimal Column-Based Low-Rank Matrix Reconstruction
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization