Quantized convolutional neural networks through the lens of partial differential equations
Publication: 2079526
DOI: 10.1007/s40687-022-00354-y
OpenAlex: W3198630114
Wikidata: Q114218934
Scholia: Q114218934
MaRDI QID: Q2079526
Gil Ben Shalom, Moshe Eliasof, Ido Ben-Yair, Eran Treister
Publication date: 30 September 2022
Published in: Research in the Mathematical Sciences
Full work available at URL: https://arxiv.org/abs/2109.00095
MSC classification:
- Artificial neural networks and deep learning (68T07)
- Pattern recognition, speech recognition (68T10)
- Machine vision and scene understanding (68T45)
Cites Work
- Nonlinear total variation based noise removal algorithms
- Translating numerical concepts for PDEs into neural architectures
- Forward stability of ResNet and its variants
- Deep neural networks motivated by partial differential equations
- Designing rotationally invariant neural networks from PDEs and variational methods
- Deep relaxation: partial differential equations for optimizing deep neural networks
- Blended coarse gradient descent for full quantization of deep neural networks
- A proposal on machine learning via dynamical systems
- Determining optical flow
- Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence
- Active contours without edges
- Stable architectures for deep neural networks
- Optimization Methods for Large-Scale Machine Learning
- Layer-Parallel Training of Deep Residual Neural Networks