The curse of overparametrization in adversarial training: precise analysis of robust generalization for random features regression
Publication: 6550964
DOI: 10.1214/24-aos2353
zbMath: 1539.62293
MaRDI QID: Q6550964
Publication date: 5 June 2024
Published in: The Annals of Statistics
Keywords: adversarial training; precise high-dimensional asymptotics; Gaussian equivalence property; random features models
MSC classifications: Asymptotic properties of parametric estimators (62F12); Robustness and adaptive procedures (parametric inference) (62F35); Learning and adaptive systems in artificial intelligence (68T05); Neural nets and related approaches to inference from stochastic processes (62M45)
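For readers unfamiliar with the setting named in the keywords, the following is a minimal illustrative sketch (not taken from the publication or this record) of random features ridge regression in the overparametrized regime, together with a crude FGSM-style adversarial perturbation of the test inputs. All dimensions, the ReLU feature map, the ridge penalty, and the perturbation budget are assumptions chosen only for illustration.

```python
# Minimal sketch of random features ridge regression with an adversarial
# evaluation; parameters below are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

n, d, N = 400, 200, 800   # samples, input dim, random features (N > n: overparametrized)
lam = 1e-3                # ridge penalty (hypothetical choice)

# Ground-truth linear target with Gaussian inputs
beta = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = X @ beta + 0.1 * rng.standard_normal(n)

# Random features: fixed first-layer weights W, ReLU nonlinearity
W = rng.standard_normal((N, d)) / np.sqrt(d)
Phi = np.maximum(X @ W.T, 0.0)            # n x N feature matrix

# Ridge regression in feature space
a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(N), Phi.T @ y)

def predict(X_):
    return np.maximum(X_ @ W.T, 0.0) @ a

# Standard test error vs. error under a small sign-of-gradient input perturbation
Xte = rng.standard_normal((1000, d))
yte = Xte @ beta
residual = predict(Xte) - yte
grad_f = (Xte @ W.T > 0) @ (a[:, None] * W)   # gradient of the predictor w.r.t. inputs
eps = 0.1
X_adv = Xte + (eps / np.sqrt(d)) * np.sign(residual[:, None] * grad_f)

print("standard test MSE   :", np.mean((predict(Xte) - yte) ** 2))
print("adversarial test MSE:", np.mean((predict(X_adv) - yte) ** 2))
```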
Cites Work
- On the impact of predictor geometry on the performance of high-dimensional ridge-regularized generalized robust regression estimators
- Eine neue Herleitung des Exponentialgesetzes in der Wahrscheinlichkeitsrechnung [A new derivation of the exponential law in probability theory]
- A random matrix approach to neural networks
- Analysis of a two-layer neural network via displacement convexity
- Precise statistical analysis of classification accuracies for adversarial training
- Surprises in high-dimensional ridgeless least squares interpolation
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
- The spectrum of random inner-product kernel matrices
- Modern Coding Theory
- Theoretical Insights Into the Optimization Landscape of Over-Parameterized Shallow Neural Networks
- Precise Error Analysis of Regularized $M$-Estimators in High Dimensions
- A mean field view of the landscape of two-layer neural networks
- The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve
- A model of double descent for high-dimensional binary linear classification
- On the Adversarial Robustness of Robust Estimators
- Reconciling modern machine-learning practice and the classical bias–variance trade-off
- Nonlinear random matrix theory for deep learning
- Deep learning: a statistical viewpoint
- On the robustness to adversarial corruption and to heavy-tailed data of the Stahel–Donoho median of means
- DeepMoM: Robust Deep Learning With Median-of-Means
- Universality Laws for High-Dimensional Learning With Random Features
- Provable tradeoffs in adversarially robust classification