A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
DOI: 10.1007/s10107-021-01629-y
zbMath: 1494.90061
arXiv: 1910.09373
OpenAlex: W3138544340
MaRDI QID: Q2149551
Publication date: 29 June 2022
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1910.09373
Keywords: global convergence; stochastic approximation; nonsmooth stochastic optimization; stochastic higher order method; stochastic quasi-Newton scheme
MSC classifications: Large-scale problems in mathematical programming (90C06); Nonconvex programming, global optimization (90C26); Stochastic programming (90C15); Methods of quasi-Newton type (90C53)
Cites Work
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Gradient methods for minimizing composite functions
- Minimizing finite sums with the stochastic average gradient
- An extragradient-based alternating direction method for convex minimization
- The subgradient extragradient method for solving variational inequalities in Hilbert space
- A globally and superlinearly convergent quasi-Newton method for general box constrained variational inequalities without smoothing approximation
- On the limited memory BFGS method for large scale optimization
- An extension of Luque's growth condition
- Error bounds and convergence analysis of feasible descent methods: A general approach
- A parameterized Newton method and a quasi-Newton method for nonsmooth equations
- On superlinear convergence of quasi-Newton methods for nonsmooth equations
- A regularized semi-smooth Newton method with projection steps for composite convex programs
- An SR1/BFGS SQP algorithm for nonconvex nonlinear programs with block-diagonal Hessian matrix
- Extragradient method in optimization: convergence and complexity
- Sub-sampled Newton methods
- Finite-sum smooth optimization with SARAH
- Newton-type methods for non-convex optimization under inexact Hessian information
- Quadratic optimization with orthogonality constraint: explicit Łojasiewicz exponent and linear convergence of retraction-based line-search and stochastic variance-reduced gradient methods
- Forward-backward quasi-Newton methods for nonsmooth optimization problems
- A nonsmooth version of Newton's method
- Exact matrix completion via convex optimization
- Global Convergence of Online Limited Memory BFGS
- Proximal Splitting Methods in Signal Processing
- Optimization with Sparsity-Inducing Penalties
- Proximal Newton-Type Methods for Minimizing Composite Functions
- Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- A Fast Algorithm for Sparse Reconstruction Based on Shrinkage, Subspace Optimization, and Continuation
- On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning
- Nonsmooth Equations: Motivation and Algorithms
- Deep Learning: Methods and Applications
- Complexity of Variants of Tseng's Modified F-B Splitting and Korpelevich's Methods for Hemivariational Inequalities with Applications to Saddle-point and Convex Optimization Problems
- Updating Quasi-Newton Matrices with Limited Storage
- Newton and Quasi-Newton Methods for a Class of Nonsmooth Equations and Related Problems
- Trust Region Methods
- Probability
- RES: Regularized Stochastic BFGS Algorithm
- Forward-Backward Envelope for the Sum of Two Nonconvex Functions: Further Properties and Nonmonotone Linesearch Algorithms
- Stochastic Model-Based Minimization of Weakly Convex Functions
- Stochastic L-BFGS: Improved Convergence Rates and Practical Acceleration Strategies
- Optimization Methods for Large-Scale Machine Learning
- Convergence Analysis of Some Algorithms for Solving Nonsmooth Equations
- Katyusha: the first direct acceleration of stochastic gradient methods
- An investigation of Newton-Sketch and subsampled Newton methods
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
- A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Signal Recovery by Proximal Forward-Backward Splitting
- Understanding Machine Learning
- Proximité et dualité dans un espace hilbertien [Proximity and duality in a Hilbert space]
- Extragradient Method with Variance Reduction for Stochastic Variational Inequalities
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
- A Stochastic Approximation Method
- Exact and inexact subsampled Newton methods for optimization
- Convex analysis and monotone operator theory in Hilbert spaces
- The elements of statistical learning. Data mining, inference, and prediction
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization