An elementary analysis of ridge regression with random design
DOI: 10.5802/crmath.367
OpenAlex: W4297998635
MaRDI QID: Q2080945
Jaouad Mourtada, Lorenzo Rosasco
Publication date: 12 October 2022
Published in: Comptes Rendus. Mathématique (Académie des Sciences, Paris)
Full work available at URL: https://arxiv.org/abs/2203.08564
Mathematics Subject Classification:
- Nonparametric regression and quantile regression (62G08)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Random matrices (probabilistic aspects) (60B20)
Related Items (1)
Cites Work
- Performance of empirical risk minimization in linear aggregation
- Nonparametric stochastic approximation with large step-sizes
- The lower tail of random quadratic forms with applications to ordinary least squares
- Random design analysis of ridge regression
- Concentration inequalities and moment bounds for sample covariance operators
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.
- Sums of random Hermitian matrices and an inequality by Rudelson
- Optimal rates for regularization of statistical inverse learning problems
- User-friendly tail bounds for sums of random matrices
- Model selection for regularized least-squares algorithm in learning theory
- Non-commutative Khintchine and Paley inequalities
- Random vectors in the isotropic position
- Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
- Distribution-free robust linear regression
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Optimal rates for the regularized least-squares algorithm
- Shannon sampling. II: Connections to learning theory
- Local Rademacher complexities
- Learning theory estimates via integral operators and their approximations
- On early stopping in gradient descent learning
- Support Vector Machines
- Strong converse for identification via quantum channels
- doi:10.1162/1532443041424337
- Benign overfitting in linear regression
- Theory of Reproducing Kernels
- Some applications of concentration inequalities to statistics