LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums
Publication: 5879118
DOI: 10.1090/mcom/3802
OpenAlex: W3170011685
MaRDI QID: Q5879118
Authors: Daniela di Serafino, Marco Viola, Nataša Krklec Jerinkić, Nataša Krejić
Publication date: 24 February 2023
Published in: Mathematics of Computation
Full work available at URL: https://arxiv.org/abs/2007.15966
Keywords: quasi-Newton methods; line search; almost-sure convergence; stochastic optimization methods; nonconvex finite sums
MSC classifications: Numerical mathematical programming methods (65K05); Stochastic programming (90C15); Stochastic approximation (62L20)
Related Items (3)
- An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians
- Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming
- Time discretization in the solution of parabolic PDEs with ANNs
Cites Work
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- Sample size selection in optimization methods for machine learning
- A Newton-Raphson version of the multivariate Robbins-Monro procedure
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
- A gradient-based globalization strategy for the Newton method
- Using gradient directions to get global convergence of Newton-type methods
- Combining stochastic adaptive cubic regularization with negative curvature for nonconvex optimization
- Global Convergence of Online Limited Memory BFGS
- On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning
- Descent direction method with line search for unconstrained optimization in noisy environment
- RES: Regularized Stochastic BFGS Algorithm
- Variance-Based Extragradient Methods with Line Search for Stochastic Variational Inequalities
- Optimization Methods for Large-Scale Machine Learning
- Adaptive cubic regularization methods with dynamic inexact Hessian information and applications to finite-sum minimization
- A robust multi-batch L-BFGS method for machine learning
- Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise
- A fully stochastic second-order trust region method
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- A Stochastic Line Search Method with Expected Complexity Analysis
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
- A Stochastic Approximation Method
- Exact and inexact subsampled Newton methods for optimization
- Subsampled inexact Newton methods for minimizing large sums of convex functions
- Probability theory. A comprehensive course