PAC-Bayesian bounds for sparse regression estimation with exponential weights
From MaRDI portal
Publication: 1952177
DOI: 10.1214/11-EJS601
zbMath: 1274.62463
arXiv: 1009.2707
OpenAlex: W3123715748
MaRDI QID: Q1952177
Publication date: 28 May 2013
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/1009.2707
Nonparametric regression and quantile regression (62G08)
Ridge regression; shrinkage estimators (Lasso) (62J07)
Linear regression; mixed models (62J05)
Bayesian inference (62F15)
Learning and adaptive systems in artificial intelligence (68T05)
Statistical aspects of information-theoretic topics (62B10)
Related Items
- Combining a relaxed EM algorithm with Occam's razor for Bayesian variable selection in high-dimensional regression
- PAC-Bayesian high dimensional bipartite ranking
- General Robust Bayes Pseudo-Posteriors: Exponential Convergence Results with Applications
- Learning theory for Multiple Kernel Learning
- Ordered smoothers with exponential weighting
- Comments on: ``On active learning methods for manifold data''
- Exponential weights in multivariate regression and a low-rankness favoring prior
- User-friendly Introduction to PAC-Bayes Bounds
- Sharp oracle inequalities for aggregation of affine estimators
- Simple proof of the risk bound for denoising by exponential weights for asymmetric noise distributions
- PAC-Bayesian estimation and prediction in sparse additive models
- Upper bounds and aggregation in bipartite ranking
- Kullback-Leibler aggregation and misspecified generalized linear models
- Concentration inequalities for the exponential weighting method
- Aggregation of affine estimators
- Estimation and variable selection with exponential weights
- Optimal learning with \textit{Q}-aggregation
- Structured, Sparse Aggregation
- Prediction of time series by statistical learning: general losses and fast rates
- Estimation from nonlinear observations via convex programming with application to bilinear regression
- Exponential screening and optimal rates of sparse estimation
- On some recent advances on high dimensional Bayesian statistics
- A quasi-Bayesian perspective to online clustering
- Sparse estimation by exponential weighting
- On the exponentially weighted aggregate with the Laplace prior
- A Bayesian approach for noisy matrix completion: optimal rate under general sampling distribution
- Robust Bayes estimation using the density power divergence
Uses Software
Cites Work
- The Adaptive Lasso and Its Oracle Properties
- Sparse regression learning by aggregation and Langevin Monte-Carlo
- Mirror averaging with sparsity priors
- Exponential screening and optimal rates of sparse estimation
- The Dantzig selector and sparsity oracle inequalities
- Generalized mirror averaging and \(D\)-convex aggregation
- PAC-Bayesian bounds for randomized empirical risk minimizers
- From \(\varepsilon\)-entropy to KL-entropy: analysis of minimum information complexity density estimation
- Concentration inequalities and model selection. Ecole d'Eté de Probabilités de Saint-Flour XXXIII -- 2003.
- Learning by mirror averaging
- Estimating the dimension of a model
- Aggregating regression procedures to improve performance
- Laplace transform estimates and deviation inequalities
- Least angle regression. (With discussion)
- Statistical learning theory and stochastic optimization. Ecole d'Eté de Probabilités de Saint-Flour XXXI -- 2001.
- Aggregated estimators and empirical complexity for least square regression
- On the conditions used to prove oracle results for the Lasso
- Some PAC-Bayesian theorems
- Simultaneous analysis of Lasso and Dantzig selector
- Sparsity oracle inequalities for the Lasso
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- Aggregation for Gaussian regression
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Atomic Decomposition by Basis Pursuit
- Information Theory and Mixing Least-Squares Regressions
- A Statistical View of Some Chemometrics Regression Tools
- On Recovery of Sparse Signals Via $\ell _{1}$ Minimization
- Learning Theory and Kernel Machines
- Regularization and Variable Selection Via the Elastic Net
- DINS, a MIP Improvement Heuristic
- Aggregation by Exponential Weighting and Sharp Oracle Inequalities
- Some Comments on \(C_p\)
- Introduction to nonparametric estimation