Generalization error of random feature and kernel methods: hypercontractivity and kernel matrix concentration
From MaRDI portal
Publication: 2134105
DOI: 10.1016/j.acha.2021.12.003
OpenAlex: W4200337531
MaRDI QID: Q2134105
Publication date: 6 May 2022
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://arxiv.org/abs/2101.10588
Related Items (8)
- Deep learning: a statistical viewpoint
- HARFE: hard-ridge random feature expansion
- A note on the prediction error of principal component regression in high dimensions
- Adversarial examples in random neural networks with general activations
- On the Inconsistency of Kernel Ridgeless Regression in Fixed Dimensions
- A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors
- Dense Hebbian neural networks: a replica symmetric picture of supervised learning
- The interpolation phase transition in neural networks: memorization and generalization under lazy training
Uses Software
Cites Work
- Kernels as features: on kernels, margins, and low-dimensional mappings
- Inequalities in Fourier analysis
- Linearized two-layers neural networks in high dimension
- Gradient descent optimizes over-parameterized deep ReLU networks
- Just interpolate: kernel "ridgeless" regression can generalize
- Optimal rates for the regularized least-squares algorithm
- Étude des coefficients de Fourier des fonctions de \(L^ p(G)\)
- Sobolev inequalities, the Poisson semigroup, and analysis on the sphere \(S^n\)
- Logarithmic Sobolev Inequalities
- High-Dimensional Statistics
- When do neural networks outperform kernel methods?
- Benign overfitting in linear regression
- Analysis of Boolean Functions
- Reconciling modern machine-learning practice and the classical bias–variance trade-off
- On the Equivalence between Kernel Quadrature Rules and Random Feature Expansions
- Perturbation bounds in connection with singular value decomposition