Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization
From MaRDI portal
Publication: Q2089785
DOI: 10.1007/s10107-021-01709-z
OpenAlex: W3201952969
MaRDI QID: Q2089785
Publication date: 24 October 2022
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/2004.04357
Keywords: nonsmooth optimization; variance reduction; sample complexity; prox-linear algorithm; stochastic composite optimization
MSC classes: Analysis of algorithms and problem complexity (68Q25); Nonconvex programming, global optimization (90C26); Randomized algorithms (68W20)
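As context for the title, a minimal sketch of one prox-linear step for a composite objective h(c(x)). This is an illustration, not the paper's stochastic variance-reduced algorithm: the outer function is taken to be the quadratic h(z) = ½‖z‖², which makes the regularized subproblem a closed-form damped Gauss-Newton solve, and the inner map c, the matrix A, and the step size t are all invented for the example.

```python
import numpy as np

# Prox-linear step for  min_x h(c(x))  with h(z) = 0.5 * ||z||^2.
# The subproblem  min_y h(c(x) + J(x)(y - x)) + (1/(2t)) ||y - x||^2
# then reduces to solving a damped normal-equation system.

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))   # illustrative data
b = rng.standard_normal(20)

def c(x):
    # A mildly nonlinear smooth inner map c: R^5 -> R^20 (toy choice).
    u = A @ x
    return u + 0.05 * u**2 - b

def jac(x):
    # Jacobian of c at x: rows of A scaled by (1 + 0.1 * (A x)_i).
    u = A @ x
    return A * (1.0 + 0.1 * u)[:, None]

def obj(x):
    return 0.5 * np.sum(c(x) ** 2)

def prox_linear_step(x, t):
    # Solve (J^T J + I/t) d = -J^T c(x); the proximal term 1/(2t)||d||^2
    # contributes the I/t damping.
    J, r = jac(x), c(x)
    d = np.linalg.solve(J.T @ J + np.eye(x.size) / t, -J.T @ r)
    return x + d

x = np.zeros(5)
for _ in range(25):
    x = prox_linear_step(x, t=1.0)
```

In the stochastic setting the paper addresses, c(x) and its Jacobian are expectations and must be estimated from samples; the variance-reduced estimators are what distinguish the paper's methods from this deterministic sketch.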
Related Items (4)
- Hybrid SGD algorithms to solve stochastic composite optimization problems with application in sparse portfolio selection problems
- Stochastic composition optimization of functions without Lipschitz continuous gradient
- On the complexity of a stochastic Levenberg-Marquardt method
- Stochastic Gauss-Newton algorithm with STORM estimators for nonconvex composite optimization
Uses Software
Cites Work
- A proximal method for composite minimization
- Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions
- Approximation procedures based on the method of multipliers
- A Gauss-Newton method for convex composite optimization
- Efficiency of minimizing compositions of convex functions and smooth maps
- Non-smooth non-convex Bregman minimization: unification and new algorithms
- On the Evaluation Complexity of Composite Function Minimization with Applications to Nonconvex Nonlinear Programming
- Descent methods for composite nondifferentiable optimization problems
- First and second order conditions for a class of nondifferentiable optimization problems
- Stochastic Methods for Composite and Weakly Convex Optimization Problems
- Stochastic Model-Based Minimization of Weakly Convex Functions
- MultiLevel Composite Stochastic Optimization via Nested Variance Reduction
- Solving (most) of a set of quadratic equalities: composite optimization for robust phase retrieval
- The importance of better models in stochastic optimization
- Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
- A Single Timescale Stochastic Approximation Method for Nested Stochastic Optimization
- Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems
- Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Regularized Iterative Stochastic Approximation Methods for Stochastic Variational Inequality Problems
- Optimal Distributed Online Prediction using Mini-Batches
- Modified Gauss–Newton scheme with worst case guarantees for global performance
- Extragradient Method with Variance Reduction for Stochastic Variational Inequalities
This page was built for publication: Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization