Statistical guarantees for regularized neural networks
DOI: 10.1016/j.neunet.2021.04.034
zbMath: 1521.68202
arXiv: 2006.00294
OpenAlex: W3159120966
MaRDI QID: Q6079063
Mahsa Taheri, Johannes Lederer, Fang Xie
Publication date: 28 September 2023
Published in: Neural Networks
Full work available at URL: https://arxiv.org/abs/2006.00294
Related Items (2)
- Function approximation by deep neural networks with parameters \(\{0, \pm \frac{1}{2}, \pm 1, 2\}\)
- Nonparametric regression with modified ReLU networks
Cites Work
- Estimation and testing under sparsity. École d'Été de Probabilités de Saint-Flour XLV -- 2015
- The Bernstein-Orlicz norm and deviation inequalities
- New concentration inequalities for suprema of empirical processes
- On the prediction performance of the Lasso
- Oracle inequalities for high-dimensional prediction
- Weak convergence and empirical processes. With applications to statistics
- Nonparametric regression using deep neural networks with ReLU activation function
- Error bounds for approximations with deep ReLU networks
- How Correlations Influence Lasso Prediction
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network
- High-Dimensional Probability
- 10.1162/153244303321897690
- Stable signal recovery from incomplete and inaccurate measurements
- Compressed sensing