Robust statistical learning with Lipschitz and convex loss functions
Publication: 2174664
DOI: 10.1007/s00440-019-00931-3 · zbMath: 1436.62178 · OpenAlex: W2954816895 · Wikidata: Q127581162 · Scholia: Q127581162 · MaRDI QID: Q2174664
Matthieu Lerasle, Guillaume Lecué, Geoffrey Chinot
Publication date: 21 April 2020
Published in: Probability Theory and Related Fields
Full work available at URL: https://doi.org/10.1007/s00440-019-00931-3
- Nonparametric regression and quantile regression (62G08)
- Nonparametric robustness (62G35)
- Learning and adaptive systems in artificial intelligence (68T05)
- Probability theory on linear topological spaces (60B11)
Related Items (13)
- Aggregated hold out for sparse linear regression with a robust loss function
- Concentration study of M-estimators using the influence function
- High-dimensional robust regression with \(L_q\)-loss functions
- Statistical performance of quantile tensor regression with convex regularization
- Robust classification via MOM minimization
- ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
- Unnamed Item
- Iteratively reweighted \(\ell_1\)-penalized robust regression
- A statistical learning assessment of Huber regression
- Finite sample properties of parametric MMD estimation: robustness to misspecification and dependence
- Total variation regularized Fréchet regression for metric-space valued data
- Suboptimality of constrained least squares and improvements via non-linear predictors
- Distribution-free robust linear regression
Uses Software
Cites Work
- Performance of empirical risk minimization in linear aggregation
- A new method for estimation and model selection: \(\rho\)-estimation
- Sub-Gaussian mean estimators
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.
- Robust linear least squares regression
- Stabilité et instabilité du risque minimax pour des variables indépendantes équidistribuées [Stability and instability of the minimax risk for independent identically distributed variables]
- Random generation of combinatorial structures from a uniform distribution
- The space complexity of approximating the frequency moments
- Smooth discrimination analysis
- On optimality of empirical risk minimization in linear aggregation
- Sub-Gaussian estimators of the mean of a random vector
- A new perspective on robust \(M\)-estimation: finite sample theory and applications to dependence-adjusted multiple testing
- Optimal aggregation of classifiers in statistical learning.
- Challenging the empirical mean and empirical variance: a deviation study
- Robust low-rank matrix estimation
- Learning from MOM's principles: Le Cam's approach
- Robust machine learning by median-of-means: theory and practice
- Robust classification via MOM minimization
- Risk minimization by median-of-means tournaments
- Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
- Convergence rates of least squares regression estimators with heavy-tailed errors
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Regularization, sparse recovery, and median-of-means tournaments
- Reconstruction and subgaussian operators in asymptotic geometric analysis
- Empirical minimization
- Local Rademacher complexities
- Learning without Concentration
- Theory of Classification: a Survey of Some Recent Advances
- Bounding the Smallest Singular Value of a Random Matrix Without Concentration
- On Multiplier Processes Under Weak Moment Assumptions
- Upper and Lower Bounds for Stochastic Processes
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item