Learning by mirror averaging
Publication:955138
DOI: 10.1214/07-AOS546
zbMath: 1274.62288
arXiv: math/0511468
OpenAlex: W3100353959
MaRDI QID: Q955138
Authors: Anatoli B. Juditsky, Philippe Rigollet, Alexandre B. Tsybakov
Publication date: 18 November 2008
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/math/0511468
Mathematics Subject Classification
- Nonparametric regression and quantile regression (62G08)
- Asymptotic properties of nonparametric inference (62G20)
- Nonparametric estimation (62G05)
- Minimax procedures in statistical decision theory (62C20)
Related Items
- On aggregation for heavy-tailed classes
- Performance of empirical risk minimization in linear aggregation
- Parameter tuning in pointwise adaptation using a propagation approach
- Entropic optimal transport is maximum-likelihood deconvolution
- Aggregation via empirical risk minimization
- Stochastic approximation versus sample average approximation for Wasserstein barycenters
- Fast learning rates in statistical inference through aggregation
- Exponential weights in multivariate regression and a low-rankness favoring prior
- Some multivariate risk indicators: Minimization by using a Kiefer–Wolfowitz approach to the mirror stochastic algorithm
- Unifying mirror descent and dual averaging
- Optimal rates for estimation of two-dimensional totally positive distributions
- User-friendly Introduction to PAC-Bayes Bounds
- An adaptive multiclass nearest neighbor classifier
- Aggregation of estimators and stochastic optimization
- Simple proof of the risk bound for denoising by exponential weights for asymmetric noise distributions
- Stochastic online convex optimization. Application to probabilistic time series forecasting
- An optimal method for stochastic composite optimization
- Estimation of Monge matrices
- Penalty methods with stochastic approximation for stochastic nonlinear programming
- Empirical risk minimization is optimal for the convex aggregation problem
- Sparse regression learning by aggregation and Langevin Monte-Carlo
- General oracle inequalities for model selection
- Noisy independent factor analysis model for density estimation and classification
- PAC-Bayesian bounds for sparse regression estimation with exponential weights
- On the optimality of the aggregate with exponential weights for low temperatures
- Mirror averaging with sparsity priors
- Kullback-Leibler aggregation and misspecified generalized linear models
- Estimation and variable selection with exponential weights
- Model selection for density estimation with \(\mathbb L_2\)-loss
- Optimal learning with \textit{Q}-aggregation
- Aggregation for Gaussian regression
- Optimal Kullback-Leibler aggregation in mixture density estimation by maximum likelihood
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Exponentially concave functions and a new information geometry
- Optimal rates of aggregation in classification under low noise assumption
- Prediction of time series by statistical learning: general losses and fast rates
- Deviation optimal learning using greedy \(Q\)-aggregation
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Optimal learning with Bernstein Online Aggregation
- Sparse estimation by exponential weighting
- A universal procedure for aggregating estimators
- Generalized mirror averaging and \(D\)-convex aggregation
- On Martingale Extensions of Vapnik–Chervonenkis Theory with Applications to Online Learning
- On the exponentially weighted aggregate with the Laplace prior
- Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
- Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
- Aggregating estimates by convex optimization
- Suboptimality of constrained least squares and improvements via non-linear predictors
- Distribution-free robust linear regression
Cites Work
- From \(\varepsilon\)-entropy to KL-entropy: analysis of minimum information complexity density estimation
- Concentration inequalities and model selection. École d'Été de Probabilités de Saint-Flour XXXIII -- 2003.
- Model selection in nonparametric regression
- Mixing strategies for density estimation.
- Complexity regularization via localized random penalties
- Statistical learning theory and stochastic optimization. École d'Été de Probabilités de Saint-Flour XXXI -- 2001.
- Recursive aggregation of estimators by the mirror descent algorithm with averaging
- Aggregation for Gaussian regression
- Spatial aggregation of local likelihood estimates with applications to classification
- Universal linear prediction by model order weighting
- Theory of Classification: a Survey of Some Recent Advances
- Information Theory and Mixing Least-Squares Regressions
- Sequential Procedures for Aggregating Arbitrary Estimators of a Conditional Mean
- Efficient agnostic learning of neural networks with bounded fan-in
- Sequential prediction of individual sequences under general loss functions
- Competitive On-line Statistics
- Learning Theory and Kernel Machines
- Prediction, Learning, and Games
- Introduction to nonparametric estimation
- Model selection and error estimation