Stochastic momentum methods for non-convex learning without bounded assumptions
From MaRDI portal
Publication: 6057975
DOI: 10.1016/j.neunet.2023.06.021 · MaRDI QID: Q6057975
Yuqing Liang, Jinlan Liu, Dong-po Xu
Publication date: 26 October 2023
Published in: Neural Networks
Keywords: non-convex optimization; machine learning; last-iterate convergence rate; PL condition; stochastic momentum methods
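The keywords name the stochastic heavy-ball (Polyak momentum) update studied in this line of work. As a purely illustrative sketch (not the paper's exact algorithm or step-size schedule), the update v_{t+1} = β v_t − η g_t, x_{t+1} = x_t + v_{t+1} can be run on a toy objective f(x) = x², which satisfies the PL condition; the gradient oracle, step size, and noise level below are all assumptions chosen for the example.

```python
import random

def stochastic_heavy_ball(grad, x0, lr=0.1, beta=0.9, steps=200):
    """Generic stochastic heavy-ball (Polyak momentum) iteration.

    grad : callable returning a noisy gradient estimate at x
    Update: v <- beta * v - lr * g;  x <- x + v
    """
    x, v = x0, 0.0
    for _ in range(steps):
        g = grad(x)          # stochastic gradient oracle
        v = beta * v - lr * g
        x = x + v
    return x

# Toy example: f(x) = x^2 (PL condition holds), gradient 2x plus small noise.
random.seed(0)
x_star = stochastic_heavy_ball(lambda x: 2 * x + random.gauss(0.0, 0.01), x0=5.0)
```

With these illustrative settings the iterates contract toward the minimizer x = 0, up to a noise floor set by the gradient noise and step size.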
Cites Work
- Lectures on convex optimization
- An adaptive Polyak heavy-ball method
- A hybrid stochastic optimization framework for composite nonconvex optimization
- Gradient Convergence in Gradient Methods with Errors
- A New Varying-Parameter Convergent-Differential Neural-Network for Solving Time-Varying Convex QP Problem Constrained by Linear-Equality
- Optimization Methods for Large-Scale Machine Learning
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Understanding Machine Learning
- Some methods of speeding up the convergence of iteration methods
- A Stochastic Approximation Method
- Sign stochastic gradient descents without bounded gradient assumption for the finite sum minimization
- Convergence of the RMSProp deep learning method with penalty for nonconvex optimization