Backtracking gradient descent method and some applications in large scale optimisation. II: Algorithms and experiments
DOI: 10.1007/s00245-020-09718-8 · OpenAlex: W3083378728 · MaRDI QID: Q2234294
Hang-Tuan Nguyen, Tuyen Trung Truong
Publication date: 19 October 2021
Published in: Applied Mathematics and Optimization
Full work available at URL: https://doi.org/10.1007/s00245-020-09718-8
Keywords: global convergence; local minimum; random dynamical systems; image classification; backtracking; gradient descent; deep neural networks; large scale optimisation; iterative optimisation; automation of learning rates
MSC classification: Artificial intelligence (68Txx); Numerical methods in optimal control (49Mxx); Computing methodologies and applications (68Uxx); Numerical methods for mathematical programming, optimization and variational techniques (65Kxx)
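The keywords centre on backtracking gradient descent with automated learning rates. For orientation, the following is a minimal sketch of the classical Armijo backtracking step that underlies this family of methods; it is an illustration only, not the paper's specific algorithm (the paper studies variants such as two-way backtracking), and the function names and parameters are chosen here for the example.

```python
import numpy as np

def backtracking_gd(f, grad_f, x0, delta0=1.0, alpha=0.5, beta=0.5,
                    tol=1e-8, max_iter=1000):
    """Gradient descent with Armijo backtracking line search (sketch).

    Illustrative only: the paper's Backtracking GD variants differ,
    e.g. in how the starting step size delta is carried between iterations.
    """
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:  # stop near a critical point
            break
        delta = delta0
        # Armijo condition: f(x - delta*g) <= f(x) - alpha*delta*||g||^2;
        # shrink delta geometrically until sufficient decrease holds.
        while f(x - delta * g) > f(x) - alpha * delta * np.dot(g, g):
            delta *= beta
        x = x - delta * g
    return x

# Usage: minimise the quadratic f(x) = ||x||^2 / 2, whose minimiser is 0.
x_min = backtracking_gd(lambda x: 0.5 * x @ x, lambda x: x,
                        np.array([3.0, -4.0]))
```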
Cites Work
- Gradient methods of maximization
- Cauchy's method of minimization
- Introductory lectures on convex optimization. A basic course.
- Optimization and dynamical systems
- Minimization of functions having Lipschitz continuous first partial derivatives
- Numerical Optimization
- Gradient Convergence in Gradient Methods with Errors
- Probabilistic Line Searches for Stochastic Optimization
- Gradient Descent Only Converges to Minimizers: Non-Isolated Critical Points and Invariant Regions
- Optimization Methods for Large-Scale Machine Learning
- Analysis of the gradient method with an Armijo–Wolfe line search on a class of non-smooth convex functions
- Convergence of the Iterates of Descent Methods for Analytic Cost Functions
- Convergence Conditions for Ascent Methods
- Limit Points of Sequences in Metric Spaces
- A Stochastic Approximation Method
- The method of steepest descent for non-linear minimization problems
- Optimization