On Approximation by Neural Networks with Optimized Activation Functions and Fixed Weights
From MaRDI portal
Publication: 5882452
DOI: 10.4208/ata.OA-2021-0006
MaRDI QID: Q5882452
Feng Jun Li, Dansheng Yu, Yunyou Qian
Publication date: 16 March 2023
Published in: Analysis in Theory and Applications
MSC classifications:
- Approximation by rational functions (41A20)
- Rate of convergence, degree of approximation (41A25)
- Approximation by operators (in particular, by integral operators) (41A35)
Related Items (1)
Cites Work
- Error estimates for the modified truncations of approximate approximation with Gaussian kernels
- Approximation by neural networks with sigmoidal functions
- The essential order of approximation for neural networks
- The approximation operators with sigmoidal functions
- Approximation by Ridge functions and neural networks with one hidden layer
- Approximation by superposition of sigmoidal and radial basis functions
- Uniform approximation by neural networks
- Rate of convergence of some neural network operators to the unit-univariate case
- Multilayer feedforward networks are universal approximators
- An approximation by neural networks with a fixed weight
- Degree of approximation by neural and translation networks with a single hidden layer
- Interpolation by neural network operators activated by ramp functions
- Universal approximation bounds for superpositions of a sigmoidal function
- Approximation by superpositions of a sigmoidal function