Suboptimality of constrained least squares and improvements via non-linear predictors
Publication: 2108490
DOI: 10.3150/22-BEJ1465
MaRDI QID: Q2108490
Authors: Tomas Vaškevičius, Nikita Zhivotovskiy
Publication date: 19 December 2022
Published in: Bernoulli
Full work available at URL: https://arxiv.org/abs/2009.09304
Keywords: empirical processes, ridge regression, constrained least squares, average stability, Vovk-Azoury-Warmuth forecaster
MSC classifications: Linear inference, regression (62Jxx); Artificial intelligence (68Txx); Nonparametric inference (62Gxx)
Related Items (1)
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Performance of empirical risk minimization in linear aggregation
- The lower tail of random quadratic forms with applications to ordinary least squares
- Random design analysis of ridge regression
- A new perspective on least squares under convex constraint
- Empirical entropy, minimax regret and minimax risk
- Probability in Banach spaces. Isoperimetry and processes
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008
- Sums of random Hermitian matrices and an inequality by Rudelson
- Robust linear least squares regression
- Learning by mirror averaging
- Logarithmic regret algorithms for online convex optimization
- Random vectors in the isotropic position
- Rates of convergence for minimum contrast estimators
- Relative expected instantaneous loss bounds
- On optimality of empirical risk minimization in linear aggregation
- Sharp oracle inequalities for least squares estimators in shape restricted regression
- Minimax estimation via wavelet shrinkage
- A distribution-free theory of nonparametric regression
- Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
- Suboptimality of constrained least squares and improvements via non-linear predictors
- Distribution-free robust linear regression
- Robust statistical learning with Lipschitz and convex loss functions
- Uniform Hanson-Wright type concentration inequalities for unbounded entries via the entropy method
- Robust covariance estimation under \(L_4\)-\(L_2\) norm equivalence
- Risk minimization by median-of-means tournaments
- Isotonic regression in general dimensions
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Fast learning rates in statistical inference through aggregation
- Empirical risk minimization is optimal for the convex aggregation problem
- Model selection via testing: an alternative to (penalized) maximum likelihood estimators
- Empirical minimization
- On risk bounds in isotonic and other shape restricted regression problems
- Local Rademacher complexities
- Learning without Concentration
- How Many Variables Should be Entered in a Regression Equation?
- High-Dimensional Statistics
- High-Dimensional Probability
- Competitive On-line Statistics
- Extending the scope of the small-ball method
- Learning Theory and Kernel Machines
- Prediction, Learning, and Games
- Understanding Machine Learning
- An Introduction to Matrix Concentration Inequalities
- Relative loss bounds for on-line density estimation with the exponential family of distributions