New equivalences between interpolation and SVMs: kernels and structured features
DOI: 10.1137/23M1568764
MaRDI QID: Q6617269
Mark A. Davenport, Vidya Muthukumar, Chiraag Kaushik, Andrew D. McRae
Publication date: 10 October 2024
Published in: SIAM Journal on Mathematics of Data Science
MSC classifications:
- 62J07 Ridge regression; shrinkage estimators (Lasso)
- 62H30 Classification and discrimination; cluster analysis (statistical aspects)
- 68Q32 Computational learning theory
- 46E22 Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces)
Cites Work
- Random design analysis of ridge regression
- Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs
- Fast learning rates for plug-in classifiers
- Support-vector networks
- Random embeddings with an almost Gaussian distortion
- Surprises in high-dimensional ridgeless least squares interpolation
- Generalization error of random feature and kernel methods: hypercontractivity and kernel matrix concentration
- Just interpolate: kernel "ridgeless" regression can generalize
- Robust learning and generalization with support vector machines
- High-Dimensional Probability
- Two Models of Double Descent for Weak Features
- Benign overfitting in linear regression
- Learning Theory and Kernel Machines
- Learning Bounds for Kernel Regression Using Effective Data Dimensionality
- Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
- PAC-Bayesian compression bounds on the prediction error of learning algorithms for classification