IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
Publication: 5745078
DOI: 10.1137/17M1122943
zbMath: 1401.90121
arXiv: 1702.00709
OpenAlex: W2804140211
MaRDI QID: Q5745078
Aryan Mokhtari, Mark Eisen, Alejandro Ribeiro
Publication date: 5 June 2018
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1702.00709
Keywords: stochastic optimization; large-scale optimization; quasi-Newton methods; superlinear convergence; incremental methods
MSC classes: Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Nonlinear programming (90C30); Methods of reduced gradient type (90C52)
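For readers who want a feel for the method behind this record: below is a minimal Python sketch of an IQN-style iteration, written from the method's published description rather than the authors' code. Each component f_i keeps a stored iterate z_i, its gradient, and a BFGS Hessian surrogate B_i; one component is refreshed per iteration (cyclically), and the next iterate solves the aggregated quadratic model. The names (`iqn_sketch`, `grads`, `hessians`) are illustrative, and the plain linear solve stands in for the paper's more efficient implementation, which avoids refactorizing the aggregate at every step.

```python
import numpy as np

def iqn_sketch(grads, hessians, x0, n_iters, eps=1e-10):
    """Illustrative IQN-style loop (a sketch, not the paper's reference code)."""
    n = len(grads)
    z = [x0.copy() for _ in range(n)]                # stored iterates z_i
    g = [grads[i](x0) for i in range(n)]             # stored gradients of f_i at z_i
    B = [hessians[i](x0).copy() for i in range(n)]   # per-component Hessian surrogates
    B_sum = sum(B)                                   # running sum_i B_i
    u = sum(B[i] @ z[i] - g[i] for i in range(n))    # running sum_i (B_i z_i - grad f_i(z_i))

    x = x0.copy()
    for t in range(n_iters):
        i = t % n                                    # cyclic choice of component to refresh
        x = np.linalg.solve(B_sum, u)                # aggregated quasi-Newton step

        # BFGS update of B_i with displacement s and gradient change y
        s = x - z[i]
        g_new = grads[i](x)
        y = g_new - g[i]
        if abs(s @ y) > eps:                         # curvature safeguard
            Bs = B[i] @ s
            B_new = B[i] + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)
        else:
            B_new = B[i]                             # skip update if curvature is negligible

        # refresh the running aggregates, then overwrite component i's memory
        B_sum = B_sum + (B_new - B[i])
        u = u + (B_new @ x - g_new) - (B[i] @ z[i] - g[i])
        z[i], g[i], B[i] = x.copy(), g_new, B_new
    return x

# Toy usage: least squares over n components f_i(x) = 0.5 * ||A_i x - b_i||^2
rng = np.random.default_rng(0)
A = [rng.standard_normal((5, 3)) for _ in range(4)]
b = [rng.standard_normal(5) for _ in range(4)]
grads = [lambda x, A=A[i], b=b[i]: A.T @ (A @ x - b) for i in range(4)]
hessians = [lambda x, A=A[i]: A.T @ A for i in range(4)]
x_star = iqn_sketch(grads, hessians, np.zeros(3), n_iters=200)
```

Under strong convexity and with the B_i initialized to positive definite matrices (here the exact component Hessians), the aggregate sum_i B_i stays positive definite, so the solve is well posed; the local superlinear rate in the title refers to the behavior of this iteration near the optimum.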
Related Items (15)
Rates of superlinear convergence for classical quasi-Newton methods ⋮ A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization ⋮ Unnamed Item ⋮ Accelerating incremental gradient optimization with curvature information ⋮ Non-asymptotic superlinear convergence of standard quasi-Newton methods ⋮ A framework for parallel second order incremental optimization algorithms for solving partially separable problems ⋮ An overview of stochastic quasi-Newton methods for large-scale machine learning ⋮ On Stochastic and Deterministic Quasi-Newton Methods for Nonstrongly Convex Optimization: Asymptotic Convergence and Rate Analysis ⋮ A modified stochastic quasi-Newton algorithm for summing functions problem in machine learning ⋮ Greedy PSB methods with explicit superlinear convergence ⋮ Stochastic proximal quasi-Newton methods for non-convex composite optimization ⋮ New results on superlinear convergence of classical quasi-Newton methods ⋮ Greedy Quasi-Newton Methods with Explicit Superlinear Convergence ⋮ Unnamed Item ⋮ LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums
Uses Software
Cites Work
- Unnamed Item
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- Minimizing finite sums with the stochastic average gradient
- Introductory lectures on convex optimization. A basic course.
- Global Convergence of Online Limited Memory BFGS
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Consensus in Ad Hoc WSNs With Noisy Links—Part I: Distributed Estimation of Deterministic Signals
- Diffusion Least-Mean Squares Over Adaptive Networks: Formulation and Performance Analysis
- Ergodic Stochastic Optimization Algorithms for Wireless Communication and Networking
- RES: Regularized Stochastic BFGS Algorithm
- Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
- On the Local and Superlinear Convergence of Quasi-Newton Methods
- A Characterization of Superlinear Convergence and Its Application to Quasi-Newton Methods
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- A Convergent Incremental Gradient Method with a Constant Step Size
- On‐line learning for very large data sets
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization