Sharp global convergence guarantees for iterative nonconvex optimization with random data
Publication: 6046308
DOI: 10.1214/22-aos2246
arXiv: 2109.09859
MaRDI QID: Q6046308
Christos Thrampoulidis, Ashwin Pananjady, Kabir Aladin Chandrasekher
Publication date: 10 May 2023
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/2109.09859
General nonlinear regression (62J02); Large-scale problems in mathematical programming (90C06); Nonconvex programming, global optimization (90C26)
Related Items (1)
Cites Work
- Statistical guarantees for the EM algorithm: from population to sample-based analysis
- High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity
- Fast global convergence of gradient methods for high-dimensional statistical recovery
- Mixtures of linear regressions
- Some inequalities for Gaussian processes and applications
- The convex geometry of linear inverse problems
- The landscape of empirical risk for nonconvex losses
- Singularity, misspecification and the convergence rate of EM
- The distribution of the Lasso: uniform control over sparse balls and adaptive parameter tuning
- Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence
- Precise statistical analysis of classification accuracies for adversarial training
- Randomly initialized EM algorithm for two-component Gaussian mixture achieves near optimality in \(O(\sqrt{n})\) iterations
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
- Phase retrieval using alternating minimization in a batch setting
- Gradient descent with random initialization: fast global convergence for nonconvex phase retrieval
- Guaranteed Matrix Completion via Non-Convex Factorization
- The Generalized Lasso With Non-Linear Observations
- Selected Works of David Brillinger
- Learning Sparsely Used Overcomplete Dictionaries via Alternating Minimization
- Phase Retrieval Using Alternating Minimization
- Fast and Reliable Parameter Estimation from Nonlinear Observations
- Phaseless Recovery Using the Gauss–Newton Method
- Symbol Error Rate Performance of Box-Relaxation Decoders in Massive MIMO
- Statistical and Computational Guarantees for the Baum-Welch Algorithm
- High-Dimensional Probability
- Non-convex Optimization for Machine Learning
- Precise Error Analysis of Regularized \(M\)-Estimators in High Dimensions
- Single-Index Models in the High Signal Regime
- Solving (most) of a set of quadratic equalities: composite optimization for robust phase retrieval
- Does SLOPE outperform bridge regression?
- Max-Affine Regression: Parameter Estimation for Gaussian Designs
- A model of double descent for high-dimensional binary linear classification
- Living on the edge: phase transitions in convex programs with random data
- Global Guarantees for Enforcing Deep Generative Priors by Empirical Risk
- A modern maximum-likelihood theory for high-dimensional logistic regression
- Estimating the Coefficients of a Mixture of Two Linear Regressions by Expectation Maximization
- Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview
- Phase retrieval via randomized Kaczmarz: theoretical guarantees
- The LASSO Risk for Gaussian Matrices
- The Noise-Sensitivity Phase Transition in Compressed Sensing
- Phase Retrieval With Random Gaussian Sensing Vectors by Alternating Projections
- Sharp Time–Data Tradeoffs for Linear Inverse Problems
- Low-rank matrix completion using alternating minimization
- The nonsmooth landscape of phase retrieval