A proximal stochastic quasi-Newton algorithm with dynamical sampling and stochastic line search
Publication: 6655883
DOI: 10.1007/s10915-024-02748-2
MaRDI QID: Q6655883
Publication date: 27 December 2024
Published in: Journal of Scientific Computing
Keywords: variance reduction; machine learning; line search; proximal stochastic methods; stochastic quasi-Newton methods
Mathematics Subject Classification: Numerical optimization and variational techniques (65K10); Nonsmooth analysis (49J52); Applications of operator theory in optimization, convex analysis, mathematical programming, economics (47N10)
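
For orientation only: the keywords above describe methods for composite minimization. The following is a generic textbook sketch of the variable-metric (quasi-Newton) proximal step such methods build on; the notation \(F\), \(f\), \(h\), \(S_k\), \(B_k\), \(\alpha_k\) is standard and assumed here, not taken from the publication, and this is not a statement of the paper's specific algorithm.

\[
\min_{x \in \mathbb{R}^n} F(x) = f(x) + h(x),
\qquad
x_{k+1} = \operatorname*{arg\,min}_{x \in \mathbb{R}^n}\;
\nabla f_{S_k}(x_k)^{\top}(x - x_k)
+ \frac{1}{2\alpha_k}\,\|x - x_k\|_{B_k}^{2}
+ h(x),
\]

where \(f\) is a smooth finite-sum or expected loss estimated on a sample \(S_k\) (dynamical sampling grows \(|S_k|\) to reduce gradient variance), \(h\) is a possibly nonsmooth regularizer handled through its proximal operator, \(B_k\) is a quasi-Newton metric (e.g. built from L-BFGS updates), and \(\alpha_k\) is a step size chosen by a (stochastic) line search.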
Cites Work
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- An inexact successive quadratic approximation method for L-1 regularized optimization
- Minimizing finite sums with the stochastic average gradient
- Sample size selection in optimization methods for machine learning
- On the limited memory BFGS method for large scale optimization
- Representations of quasi-Newton matrices and their use in limited memory methods
- Globalized inexact proximal Newton-type methods for nonconvex composite functions
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- Inexact successive quadratic approximation for regularized optimization
- Curvature formulas for implicit curves and surfaces
- On the Global Convergence of the BFGS Method for Nonconvex Unconstrained Optimization Problems
- Variable Metric Inexact Line-Search-Based Methods for Nonsmooth Optimization
- Proximal Splitting Methods in Signal Processing
- Proximal Newton-Type Methods for Minimizing Composite Functions
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- Adaptive Sampling Strategies for Stochastic Optimization
- First-Order Methods in Optimization
- Optimization Methods for Large-Scale Machine Learning
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- An investigation of Newton-Sketch and subsampled Newton methods
- Inexact proximal stochastic second-order methods for nonconvex composite optimization
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Understanding Machine Learning
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Exact and inexact subsampled Newton methods for optimization
- LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums
- A modified BFGS method and its global convergence in nonconvex minimization
- A line search based proximal stochastic gradient algorithm with dynamical variance reduction
- Adaptive sampling strategies for risk-averse stochastic optimization with constraints
- The Elements of Statistical Learning
- A structured L-BFGS method and its application to inverse problems
- Constrained and composite optimization via adaptive sampling methods