Pages that link to "Item:Q2425178"
From MaRDI portal
The following pages link to Behavior of accelerated gradient methods near critical points of nonconvex functions (Q2425178):
Displaying 13 items.
- Analytical convergence regions of accelerated gradient descent in nonconvex optimization under regularity condition (Q2173914)
- Approximating the nearest stable discrete-time system (Q2419035)
- A Newton-Based Method for Nonconvex Optimization with Fast Evasion of Saddle Points (Q4620423)
- Finding the Nearest Positive-Real System (Q4637764)
- Global Convergence of Stochastic Gradient Hamiltonian Monte Carlo for Nonconvex Stochastic Optimization: Nonasymptotic Performance Bounds and Momentum-Based Acceleration (Q5058053)
- Second-Order Guarantees of Distributed Gradient Algorithms (Q5131964)
- A Bregman Forward-Backward Linesearch Algorithm for Nonconvex Composite Optimization: Superlinear Convergence to Nonisolated Local Minima (Q5853567)
- Generalized Momentum-Based Methods: A Hamiltonian Perspective (Q5857293)
- Convergence of the Momentum Method for Semialgebraic Functions with Locally Lipschitz Gradients (Q6071885)
- Switched diffusion processes for non-convex optimization and saddle points search (Q6089196)
- Inertial Newton algorithms avoiding strict saddle points (Q6145046)
- On the Global Convergence of Randomized Coordinate Gradient Descent for Nonconvex Optimization (Q6158001)
- Gradient descent provably escapes saddle points in the training of shallow ReLU networks (Q6655804)