Theory of deep convolutional neural networks: downsampling
Publication: 2185717
DOI: 10.1016/j.neunet.2020.01.018 · zbMath: 1434.68532 · OpenAlex: W3002335888 · Wikidata: Q89611717 · Scholia: Q89611717 · MaRDI QID: Q2185717
Publication date: 5 June 2020
Published in: Neural Networks
Full work available at URL: https://doi.org/10.1016/j.neunet.2020.01.018
Related Items (25)
- Effects of depth, width, and initialization: A convergence analysis of layer-wise training for deep linear neural networks
- Density results by deep neural network operators with integer weights
- Lagrange-Chebyshev interpolation for image resizing
- Neural network interpolation operators activated by smooth ramp functions
- A deep network construction that adapts to intrinsic dimensionality beyond the domain
- Theory of deep convolutional neural networks. III: Approximating radial functions
- Approximating smooth and sparse functions by deep neural networks: optimal approximation rates and saturation
- Rates of approximation by ReLU shallow neural networks
- Neural network interpolation operators optimized by Lagrange polynomial
- Approximation error for neural network operators by an averaged modulus of smoothness
- Image scaling by de la Vallée-Poussin filtered interpolation
- Learning sparse and smooth functions by deep sigmoid nets
- Error analysis of kernel regularized pairwise learning with a strongly convex loss
- Neural network interpolation operators of multivariate functions
- Connections between operator-splitting methods and deep neural networks with applications in image segmentation
- Error bounds for approximations using multichannel deep convolutional neural networks with downsampling
- Deep learning theory of distribution regression with CNNs
- The universal approximation theorem for complex-valued neural networks
- Numerical solution of the parametric diffusion equation by deep neural networks
- Theory of deep convolutional neural networks. II: Spherical analysis
- Rates of approximation by neural network interpolation operators
- Convolutional spectral kernel learning with generalization guarantees
- Approximation of functions from Korobov spaces by deep convolutional neural networks
- Error bounds for ReLU networks with depth and width parameters
- Approximating functions with multi-features by deep convolutional neural networks
Cites Work
- Unnamed Item
- Consistency analysis of an empirical minimum error entropy algorithm
- Unregularized online learning algorithms with general loss functions
- Multilayer feedforward networks are universal approximators
- Provable approximation properties for deep neural networks
- Distributed kernel-based gradient descent algorithms
- Approximation properties of a multilayered feedforward artificial neural network
- Limitations of the approximation capabilities of neural networks with one hidden layer
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Error bounds for approximations with deep ReLU networks
- Universality of deep convolutional neural networks
- Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates
- Adaptive regression estimation with multilayer feedforward neural networks
- Support Vector Machines
- Ten Lectures on Wavelets
- Universal approximation bounds for superpositions of a sigmoidal function
- Deep distributed convolutional neural networks: Universality
- Approximation by combinations of ReLU and squared ReLU ridge functions with $\ell^1$ and $\ell^0$ controls
- Optimal Approximation with Sparsely Connected Deep Neural Networks
- Thresholded spectral algorithms for sparse approximations
- A Fast Learning Algorithm for Deep Belief Nets
- Approximation by superpositions of a sigmoidal function
- On the best approximation by ridge functions in the uniform norm