Research on three-step accelerated gradient algorithm in deep learning
From MaRDI portal
Publication: 5880102
DOI: 10.1080/24754269.2020.1846414 · OpenAlex: W3110302316 · MaRDI QID: Q5880102
Shirong Zhou, Yongqiang Lian, Yin-cai Tang
Publication date: 7 March 2023
Published in: Statistical Theory and Related Fields
Full work available at URL: https://doi.org/10.1080/24754269.2020.1846414
Uses Software
Cites Work
- An optimal method for stochastic composite optimization
- Introductory lectures on convex optimization. A basic course.
- Minimization of functions having Lipschitz continuous first partial derivatives
- Machine Learning
- ggplot2
- Reducing the Dimensionality of Data with Neural Networks
- Some Algorithms for Minimizing a Function of Several Variables
- Learning representations by back-propagating errors
- A Fast Learning Algorithm for Deep Belief Nets
- Some methods of speeding up the convergence of iteration methods
- A logical calculus of the ideas immanent in nervous activity