Growing axons: greedy learning of neural networks with application to function approximation
From MaRDI portal
Publication:2689211
DOI: 10.1515/RNAM-2023-0001
OpenAlex: W2981405625
MaRDI QID: Q2689211
Daria Fokina, Ivan V. Oseledets
Publication date: 9 March 2023
Published in: Russian Journal of Numerical Analysis and Mathematical Modelling
Full work available at URL: https://arxiv.org/abs/1910.12686
Mathematics Subject Classification:
- Algorithms for approximation of functions (65D15)
- Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.) (68T20)
Related Items (1)
Cites Work
- Unnamed Item
- Greedy function approximation: A gradient boosting machine.
- Approximation and estimation bounds for artificial neural networks
- Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem
- Gradient descent optimizes over-parameterized deep ReLU networks
- Error bounds for approximations with deep ReLU networks
- Approximation and learning by greedy algorithms
- Deep ReLU networks and high-order finite element methods
- Error bounds for approximations with deep ReLU neural networks in W^{s,p} norms
- Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise
- Approximation by superpositions of a sigmoidal function