Stochastic Markov gradient descent and training low-bit neural networks
DOI: 10.1007/s43670-021-00015-1
OpenAlex: W3206635749
MaRDI QID: Q2073135
Jonathan Ashbrock, Alexander M. Powell
Publication date: 27 January 2022
Published in: Sampling Theory, Signal Processing, and Data Analysis
Full work available at URL: https://arxiv.org/abs/2008.11117
Keywords: neural networks; quantization; stochastic gradient descent; low-memory training; stochastic Markov gradient descent
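For readers skimming this record, the sketch below illustrates the general idea behind the keywords: training with weights constrained to a low-bit lattice via stochastically rounded gradient steps. This is a minimal toy illustration, not the authors' exact SMGD update rule; the lattice spacing, learning rate, clipping range, and quadratic toy objective are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_round(x, delta):
    """Round x onto the lattice delta * Z, taking the upper neighbor with
    probability equal to the fractional part, so E[round(x)] = x."""
    scaled = x / delta
    lower = np.floor(scaled)
    return delta * (lower + (rng.random(x.shape) < scaled - lower))

def quantized_step(w, grad, lr, delta, w_min, w_max):
    """One low-bit training step (illustrative, not the paper's exact rule):
    take an SGD step, stochastically round back onto the lattice, then clip
    to the finite low-bit alphabet [w_min, w_max]."""
    return np.clip(stochastic_round(w - lr * grad, delta), w_min, w_max)

# Toy usage: minimize ||w - target||^2 while every iterate stays on a
# 9-level (roughly 3-bit) lattice with spacing delta = 0.25.
delta = 0.25
target = np.array([0.5, -0.75, 1.0, 0.0, -0.25])   # already lattice points
w = stochastic_round(rng.normal(size=target.shape), delta)
for _ in range(200):
    grad = 2.0 * (w - target)                      # gradient of the loss
    w = quantized_step(w, grad, lr=0.05, delta=delta, w_min=-1.0, w_max=1.0)
print(w)  # hovers near target; all entries remain multiples of delta
```

Stochastic rounding is the natural choice here because it is unbiased in expectation, which is the usual motivation for randomized quantization in low-memory training.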
Cites Work
- Universality of deep convolutional neural networks
- Blended coarse gradient descent for full quantization of deep neural networks
- Least-Squares Halftoning via Human Vision System and Markov Gradient Descent (LS-MGD): Algorithm and Analysis
- Deep distributed convolutional neural networks: Universality
- Lipschitz properties for deep convolutional networks
- Deep Network Approximation Characterized by Number of Neurons
- Approximation by superpositions of a sigmoidal function
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm