A MOM-based ensemble method for robustness, subsampling and hyperparameter tuning
DOI: 10.1214/21-EJS1814
zbMath: 1466.62272
arXiv: 1812.02435
MaRDI QID: Q2044333
Publication date: 9 August 2021
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/1812.02435
Mathematics Subject Classification:
- 62J07 Ridge regression; shrinkage estimators (Lasso)
- 62J05 Linear regression; mixed models
- 62F35 Robustness and adaptive procedures (parametric inference)
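The "MOM" in the title refers to the median-of-means estimator, the robust building block the paper's ensemble method is based on. As background, here is a minimal sketch of the plain median-of-means mean estimator (the function name, block-splitting scheme, and toy data are illustrative assumptions, not taken from the paper):

```python
import random
import statistics


def median_of_means(xs, k):
    """Median-of-means estimate of the mean of xs using k blocks.

    Split the sample into k equal-sized contiguous blocks, average each
    block, and return the median of the block means. This is robust to
    heavy tails and to a minority of contaminated blocks.
    """
    n = len(xs)
    if not 1 <= k <= n:
        raise ValueError("k must be between 1 and len(xs)")
    block = n // k  # any remainder samples are ignored for simplicity
    means = [sum(xs[i * block:(i + 1) * block]) / block for i in range(k)]
    return statistics.median(means)


# Toy data: Gaussian sample with true mean 1.0, plus three huge outliers.
random.seed(0)
sample = [random.gauss(1.0, 1.0) for _ in range(997)] + [1e6, -1e6, 1e6]
random.shuffle(sample)  # scatter the outliers across blocks

print(statistics.fmean(sample))        # ruined by the outliers
print(median_of_means(sample, k=11))   # close to the true mean 1.0
```

With 11 blocks, at most three block means are contaminated, so the median over blocks still tracks the true mean while the plain sample mean is off by roughly 1000.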
Cites Work
- Bagging predictors
- Estimator selection in the Gaussian setting
- Performance of empirical risk minimization in linear aggregation
- On statistics, computation and scalability
- A new method for estimation and model selection: \(\rho\)-estimation
- Estimator selection with respect to Hellinger-type risks
- Robust linear least squares regression
- Linear and convex aggregation of density estimators
- Risk bounds for statistical learning
- Concentration inequalities and model selection. École d'Été de Probabilités de Saint-Flour XXXIII -- 2003
- A survey of cross-validation procedures for model selection
- Universal pointwise selection rule in multivariate function estimation
- Random generation of combinatorial structures from a uniform distribution
- Estimation of the mean of a multivariate normal distribution
- Risk bounds for model selection via penalization
- The space complexity of approximating the frequency moments
- On optimality of empirical risk minimization in linear aggregation
- Optimal bounds for aggregation of affine estimators
- Regularization and the small-ball method. I: Sparse recovery
- A new perspective on robust \(M\)-estimation: finite sample theory and applications to dependence-adjusted multiple testing
- Optimal aggregation of classifiers in statistical learning.
- Slope meets Lasso: improved oracle bounds and optimality
- Learning from MOM's principles: Le Cam's approach
- Robust machine learning by median-of-means: theory and practice
- Risk minimization by median-of-means tournaments
- Recursive aggregation of estimators by the mirror descent algorithm with averaging
- Learning without Concentration
- Choice of V for V-Fold Cross-Validation in Least-Squares Density Estimation
- A split-and-conquer approach for analysis of
- Information Theory and Mixing Least-Squares Regressions
- Ideal spatial adaptation by wavelet shrinkage
- A Scalable Bootstrap for Massive Data
- Learning Theory and Kernel Machines
- Estimation of High Dimensional Mean Regression in the Absence of Symmetry and Light Tail Assumptions
- Distributed Matrix Completion and Robust Factorization
- Introduction to nonparametric estimation
- Gaussian model selection
- Randomized maximum-contrast selection: subagging for large-scale regression