On Stochastic and Deterministic Quasi-Newton Methods for Nonstrongly Convex Optimization: Asymptotic Convergence and Rate Analysis
Publication: 5107212
DOI: 10.1137/17M1152474
zbMath: 1461.65189
arXiv: 1710.05509
MaRDI QID: Q5107212
Farzad Yousefian, Uday V. Shanbhag, Angelia Nedić
Publication date: 17 April 2020
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1710.05509
Mathematics Subject Classification:
- Numerical mathematical programming methods (65K05)
- Large-scale problems in mathematical programming (90C06)
- Nonlinear programming (90C30)
- Stochastic programming (90C15)
- Methods of quasi-Newton type (90C53)
Related Items
- A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
- An overview of stochastic quasi-Newton methods for large-scale machine learning
- A Method with Convergence Rates for Optimization Problems with Variational Inequality Constraints
Cites Work
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- On stochastic gradient and subgradient methods with adaptive steplength sequences
- Lagrange multipliers, (exact) regularization and error bounds for monotone variational inequalities
- Minimizing finite sums with the stochastic average gradient
- Finite perturbation of convex programs
- On the limited memory BFGS method for large scale optimization
- Block-coordinate gradient descent method for linearly constrained nonsmooth separable optimization
- On smoothing, regularization, and averaging in stochastic approximation methods for stochastic variational inequality problems
- Global Convergence of Online Limited Memory BFGS
- Self-Tuned Stochastic Approximation Schemes for Non-Lipschitzian Stochastic Multi-User Optimization and Nash Games
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Exact Regularization of Convex Programs
- Robust Stochastic Approximation Approach to Stochastic Programming
- Nonlinear Perturbation of Linear Programs
- RES: Regularized Stochastic BFGS Algorithm
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
- On Stochastic Subgradient Mirror-Descent Algorithm with Weighted Averaging
- Self-Tuned Mirror Descent Schemes for Smooth and Nonsmooth High-Dimensional Stochastic Optimization
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- Regularized Iterative Stochastic Approximation Methods for Stochastic Variational Inequality Problems
- A Family of Variable-Metric Methods Derived by Variational Means
- A new approach to variable metric algorithms
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
- A Stochastic Approximation Method
- A modified BFGS method and its global convergence in nonconvex minimization