How do noise tails impact on deep ReLU networks?
From MaRDI portal
Publication:6621550
DOI: 10.1214/24-AOS2428 · MaRDI QID: Q6621550
Wen-Xin Zhou, Yihong Gu, Jianqing Fan
Publication date: 18 October 2024
Published in: The Annals of Statistics
Keywords: robustness; truncation; heavy tails; composition of functions; optimal rates; approximability of ReLU networks
Cites Work
- A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery
- Geometrizing rates of convergence. II
- Geometrizing rates of convergence. III
- Multivariate adaptive regression splines
- Maximum likelihood estimates in exponential response models
- Approximation and estimation bounds for artificial neural networks
- Robust regression: Asymptotics, conjectures and Monte Carlo
- A distribution-free theory of nonparametric regression
- Challenging the empirical mean and empirical variance: a deviation study
- Statistical consistency and asymptotic normality for high-dimensional robust \(M\)-estimators
- On the rate of convergence of fully connected deep neural network regression estimates
- Optimal approximation rate of ReLU networks in terms of width and depth
- On least squares estimation under heteroscedastic and heavy-tailed errors
- Deep learning for the partially linear Cox model
- Nonparametric regression using deep neural networks with ReLU activation function
- Error bounds for approximations with deep ReLU networks
- On deep learning as a remedy for the curse of dimensionality in nonparametric regression
- Convergence rates of least squares regression estimators with heavy-tailed errors
- Local linear regression smoothers and their minimax efficiencies
- Adaptive Huber Regression
- Robust Locally Weighted Regression and Smoothing Scatterplots
- Design-adaptive Nonparametric Regression
- Universal approximation bounds for superpositions of a sigmoidal function
- The Fitting of Power Series, Meaning Polynomials, Illustrated on Band-Spectroscopic Data
- Deep Neural Networks for Estimation and Inference
- A Tuning-free Robust and Efficient Approach to High-dimensional Regression
- Deep Network Approximation for Smooth Functions
- Deep Network Approximation Characterized by Number of Neurons
- Estimation of High Dimensional Mean Regression in the Absence of Symmetry and Light Tail Assumptions
- Robust Estimation of a Location Parameter
- Robust Statistics
- Introduction to nonparametric estimation
- Approximation by superpositions of a sigmoidal function
- Factor Augmented Sparse Throughput Deep ReLU Neural Networks for High Dimensional Regression