Aggregation of estimators and stochastic optimization
zbMATH: 1455.62080
MaRDI QID: Q2197367
Publication date: 31 August 2020
Published in: Journal de la Société Française de Statistique & Revue de Statistique Appliquée
Full work available at URL: http://www.numdam.org/item/JSFS_2008__149_1_3_0
Cites Work
- Primal-dual subgradient methods for convex problems
- Generalized mirror averaging and \(D\)-convex aggregation
- Linear and convex aggregation of density estimators
- Learning by mirror averaging
- Theory of statistical inference and information. Transl. from the Slovak by the author
- Model selection in nonparametric regression
- Aggregating regression procedures to improve performance
- Combining different procedures for adaptive regression
- Randomized prediction of individual sequences
- Mixing strategies for density estimation.
- Functional aggregation for nonparametric regression.
- Direct estimation of the index coefficient in a single-index model
- Statistical learning theory and stochastic optimization. École d'Été de Probabilités de Saint-Flour XXXI -- 2001.
- On the Bayes-risk consistency of regularized boosting methods.
- Aggregated estimators and empirical complexity for least square regression
- Boosting a weak learning algorithm by majority
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Density estimation with stagewise optimization of the empirical risk
- High-dimensional generalized linear models and the lasso
- Sparsity oracle inequalities for the Lasso
- Recursive aggregation of estimators by the mirror descent algorithm with averaging
- Aggregation for Gaussian regression
- Approximation and learning by greedy algorithms
- Model selection via testing: an alternative to (penalized) maximum likelihood estimators.
- Boosting with early stopping: convergence and consistency
- Variational Analysis
- Adaptive Regression by Mixing
- doi:10.1162/153244304773936108
- Learning Theory and Kernel Machines
- Aggregation and Sparsity Via ℓ1 Penalized Least Squares
- Aggregation by Exponential Weighting and Sharp Oracle Inequalities
- Suboptimality of Penalized Empirical Risk Minimization in Classification
- Sparse Density Estimation with ℓ1 Penalties
- Prediction, Learning, and Games
- A Stochastic Approximation Method