Surprises in high-dimensional ridgeless least squares interpolation

Publication: 2131262

DOI: 10.1214/21-AOS2133
zbMath: 1486.62202
arXiv: 1903.08560
OpenAlex: W2923764619
MaRDI QID: Q2131262

Trevor Hastie, Andrea Montanari, Saharon Rosset, Ryan J. Tibshirani

Publication date: 25 April 2022

Published in: The Annals of Statistics

Full work available at URL: https://arxiv.org/abs/1903.08560
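For readers landing on this record, a brief illustration of the object the paper studies may help. The ridgeless least squares interpolator is the minimum-ℓ2-norm solution to X β = y, i.e. β̂ = X⁺ y, equivalently the ridge estimator in the limit λ → 0+; with more features than samples it fits the training data exactly. The NumPy snippet below is a minimal sketch under that standard definition, not code from the paper; all names and parameter values are invented for the example.

```python
import numpy as np

# Minimal sketch (not from the paper): the ridgeless least squares
# interpolator is the minimum-l2-norm solution to X @ beta = y.
# With p > n (overparameterized regime) it interpolates the training data.

rng = np.random.default_rng(0)
n, p = 50, 200                               # more features than samples
X = rng.standard_normal((n, p))
beta_star = rng.standard_normal(p) / np.sqrt(p)
y = X @ beta_star + 0.1 * rng.standard_normal(n)

# Minimum-norm interpolator via the Moore-Penrose pseudoinverse.
beta_hat = np.linalg.pinv(X) @ y

# Equivalently, ridge regression with penalty lambda -> 0+.
lam = 1e-8
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print("training error:", np.linalg.norm(X @ beta_hat - y))        # ~0: exact interpolation
print("distance to ridgeless limit:", np.linalg.norm(beta_hat - beta_ridge))
```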




Related Items

Mehler’s Formula, Branching Process, and Compositional Kernels of Deep Neural Networks
Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks
Deep learning: a statistical viewpoint
Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
Learning curves of generic features maps for realistic datasets with a teacher-student model*
Generalization error rates in kernel regression: the crossover from the noiseless to noisy regime*
On the proliferation of support vectors in high dimensions*
A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
Benign overfitting in linear regression
Trading Signals in VIX Futures
Overparameterization and Generalization Error: Weighted Trigonometric Interpolation
Benefit of Interpolation in Nearest Neighbor Algorithms
Ridge-type linear shrinkage estimation of the mean matrix of a high-dimensional normal distribution
HARFE: hard-ridge random feature expansion
High dimensional binary classification under label shift: phase transition and regularization
Large-dimensional random matrix theory and its applications in deep learning and wireless communications
On the Inconsistency of Kernel Ridgeless Regression in Fixed Dimensions
Cross-Trait Prediction Accuracy of Summary Statistics in Genome-Wide Association Studies
Free dynamics of feature learning processes
A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors
Mini-workshop: Mathematical foundations of robust and generalizable learning. Abstracts from the mini-workshop held October 2–8, 2022
Bayesian Conjugacy in Probit, Tobit, Multinomial Probit and Extensions: A Review and New Results
Smoothly varying regularization
Random neural networks in the infinite width limit as Gaussian processes
Stability of the scattering transform for deformations with minimal regularity
Universality of approximate message passing with semirandom matrices
High-Dimensional Analysis of Double Descent for Linear Regression with Random Projections
A Generalization Gap Estimation for Overparameterized Models via the Langevin Functional Variance
Universality of regularized regression estimators in high dimensions
Training-conditional coverage for distribution-free predictive inference
Benign Overfitting and Noisy Features
Dimension independent excess risk by stochastic gradient descent
Precise statistical analysis of classification accuracies for adversarial training
On the robustness of minimum norm interpolators and regularized empirical risk minimizers
AdaBoost and robust one-bit compressed sensing
A Unifying Tutorial on Approximate Message Passing
A phase transition for finding needles in nonlinear haystacks with LASSO artificial neural networks
The interpolation phase transition in neural networks: memorization and generalization under lazy training
Prediction, Estimation, and Attribution
For interpolating kernel machines, minimizing the norm of the ERM solution maximizes stability
Prediction errors for penalized regressions based on generalized approximate message passing


Uses Software


Cites Work