Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems
From MaRDI portal
Publication:5231692
DOI: 10.1137/17M1151031
zbMath: 1431.65084
arXiv: 1707.03505
Wikidata: Q127492622 (Scholia: Q127492622)
MaRDI QID: Q5231692
Publication date: 27 August 2019
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1707.03505
Mathematics Subject Classification:
- Numerical mathematical programming methods (65K05)
- Nonconvex programming, global optimization (90C26)
- Nonlinear programming (90C30)
- Stochastic programming (90C15)
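As context for the title, here is a minimal sketch of a proximally guided stochastic subgradient loop: an outer loop fixes a proximal center, and inner stochastic subgradient steps approximately minimize the proximally regularized subproblem before the center is moved to the averaged inner iterate. This is an illustrative one-dimensional sketch under assumed parameters, not the paper's exact algorithm; the function `prox_guided_subgradient`, the test objective, and all step-size constants are hypothetical choices.

```python
def prox_guided_subgradient(subgrad_sample, x0, rho=1.0,
                            outer_iters=50, inner_iters=100, step=0.01):
    """Sketch of a proximally guided stochastic subgradient method (1-D).

    Outer loop: fix a proximal center x_c and approximately minimize
        phi(x) + (rho/2) * (x - x_c)**2
    with inner stochastic subgradient steps; the next center is the
    running average of the inner iterates. Illustrative only.
    """
    x_c = x0
    for _ in range(outer_iters):
        x = x_c
        avg = 0.0
        for k in range(inner_iters):
            # subgradient of the proximally regularized subproblem
            g = subgrad_sample(x) + rho * (x - x_c)
            x -= step * g
            avg += (x - avg) / (k + 1)  # running average of inner iterates
        x_c = avg  # move the proximal center
    return x_c


# Hypothetical weakly convex test objective phi(x) = |x**2 - 1|,
# whose minimizers are x = +/-1; a (noiseless) subgradient oracle:
def sg(x):
    return 2.0 * x if x * x > 1.0 else -2.0 * x
```

For example, `prox_guided_subgradient(sg, 2.0)` drives the iterates toward the minimizer at 1 despite the nonsmooth kink there, which a plain gradient method would have to handle without the stabilizing proximal term.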
Related Items:
- A Unified Analysis of Descent Sequences in Weakly Convex Optimization, Including Convergence Rates for Bundle Methods
- Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization
- Weakly-convex–concave min–max optimization: provable algorithms and applications in machine learning
- An online conjugate gradient algorithm for large-scale data analysis in machine learning
- Graphical Convergence of Subgradients in Nonconvex Optimization and Learning
- Stochastic AUC optimization with general loss
- The landscape of the proximal point method for nonconvex-nonconcave minimax optimization
- Stochastic Model-Based Minimization of Weakly Convex Functions
- A Single Timescale Stochastic Approximation Method for Nested Stochastic Optimization
- A zeroth order method for stochastic weakly convex optimization
- Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization
- Generalized Momentum-Based Methods: A Hamiltonian Perspective
- On strongly quasiconvex functions: existence results and proximal point algorithms
- A hybrid stochastic optimization framework for composite nonconvex optimization
Cites Work
- A proximal method for composite minimization
- Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions
- Solution of nonconvex nonsmooth stochastic optimization problems
- Computing proximal points of nonconvex functions
- Stochastic generalized gradient method for nonconvex nonsmooth stochastic optimization
- A Gauss-Newton method for convex composite optimization
- Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
- Phase retrieval from very few measurements
- Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization
- Exact Recovery in the Stochastic Block Model
- A Linearization Method for Nonsmooth Stochastic Programming Problems
- A Redistributed Proximal Bundle Method for Nonconvex Optimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Descent methods for composite nondifferentiable optimization problems
- Second order necessary and sufficient conditions for convex composite NDO
- A model algorithm for composite nondifferentiable optimization problems
- Monotone Operators and the Proximal Point Algorithm
- Variational Analysis
- Stochastic Methods for Composite and Weakly Convex Optimization Problems
- Stochastic Model-Based Minimization of Weakly Convex Functions
- Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- A Stochastic Approximation Method
- The nonsmooth landscape of phase retrieval
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization