A deterministic gradient-based approach to avoid saddle points
Publication: 6622959
DOI: 10.1017/S0956792522000316
MaRDI QID: Q6622959
Authors: Stanley J. Osher, Lisa Maria Kreusser, Bei Wang
Publication date: 23 October 2024
Published in: European Journal of Applied Mathematics
MSC classification: Nonconvex programming, global optimization (90C26); Numerical optimization and variational techniques (65K10)
Cites Work
- Unnamed Item
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon^{-3/2})\) for nonconvex optimization
- A geometric analysis of phase retrieval
- Exploiting negative curvature in deterministic and stochastic optimization
- First-order methods almost always avoid strict saddle points
- Cubic regularization of Newton method and its global performance
- Learning Deep Architectures for AI
- A Newton-Based Method for Nonconvex Optimization with Fast Evasion of Saddle Points
- Finding approximate local minima faster than gradient descent
- Gradient Descent Finds the Cubic-Regularized Nonconvex Newton Step
- Learning representations by back-propagating errors
- Laplacian Smoothing Stochastic Gradient Markov Chain Monte Carlo