Approximation results for Gradient Descent trained Shallow Neural Networks in $1d$

From MaRDI portal
Publication: 6411089

arXiv: 2209.08399
MaRDI QID: Q6411089

Author name not available

Publication date: 17 September 2022

Abstract: Two aspects of neural networks that have been extensively studied in the recent literature are their function approximation properties and their training by gradient descent methods. The approximation problem seeks accurate approximations with a minimal number of weights. In most of the current literature, these weights are fully or partially hand-crafted, showing the capabilities of neural networks but not necessarily their practical performance. In contrast, optimization theory for neural networks heavily relies on an abundance of weights in over-parametrized regimes. This paper balances these two demands and provides an approximation result for shallow networks in $1d$ with non-convex weight optimization by gradient descent. We consider finite width networks and infinite sample limits, which is the typical setup in approximation theory. Technically, this problem is not over-parametrized; however, some form of redundancy reappears as a loss in approximation rate compared to the best possible rates.




Has companion code repository: https://github.com/rustygentile/approx-trained
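The abstract's setting (a finite-width shallow network in $1d$, trained by gradient descent on an effectively infinite sample, with the approximation error measured afterwards) can be illustrated with a short script. The sketch below is independent of the companion repository and is not the paper's code: the target function, network width, initialization, learning rate, and step count are illustrative assumptions, not the configuration analysed by the authors.

```python
# Minimal sketch (not the authors' code): full-batch gradient descent for a
# shallow ReLU network approximating a 1d target function on a dense grid,
# which serves as a crude stand-in for the infinite-sample limit.
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def network(x, w, b, a):
    # f(x) = sum_i a_i * relu(w_i * x + b_i), evaluated on a vector x
    return relu(np.outer(x, w) + b) @ a

def train(target, width=32, n_grid=512, lr=1e-2, steps=20000):
    x = np.linspace(0.0, 1.0, n_grid)
    y = target(x)
    # Random initialization; purely an illustrative choice.
    w = rng.normal(size=width)
    b = rng.uniform(-1.0, 0.0, size=width)
    a = rng.normal(size=width) / width
    for _ in range(steps):
        pre = np.outer(x, w) + b            # (n_grid, width) pre-activations
        act = relu(pre)                     # hidden-layer outputs
        residual = act @ a - y              # pointwise prediction error
        grad_a = act.T @ residual / n_grid  # gradient of 0.5 * mean squared residual
        chain = (pre > 0) * a               # ReLU derivative times outer weights
        grad_w = (chain * (residual * x)[:, None]).sum(axis=0) / n_grid
        grad_b = (chain * residual[:, None]).sum(axis=0) / n_grid
        a -= lr * grad_a
        w -= lr * grad_w
        b -= lr * grad_b
    # L2 approximation error of the trained network on the grid
    return np.sqrt(np.mean((network(x, w, b, a) - y) ** 2))

if __name__ == "__main__":
    target = lambda t: np.abs(t - 0.5)      # simple piecewise-linear 1d target
    for width in (8, 16, 32, 64):
        print(f"width={width:3d}  L2 error ~ {train(target, width=width):.4f}")
```

Printing the error for several widths gives a rough picture of how the approximation error of the gradient-descent-trained network decays with width; the paper's contribution is a proven rate for this kind of setup, which the script does not establish.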
