Greedy algorithms for prediction
From MaRDI portal
Publication: 265302
DOI: 10.3150/14-BEJ691
zbMath: 1388.62209
arXiv: 1602.01951
OpenAlex: W3103732143
MaRDI QID: Q265302
Publication date: 1 April 2016
Published in: Bernoulli
Full work available at URL: https://arxiv.org/abs/1602.01951
Density estimation (62G07)
Ridge regression; shrinkage estimators (Lasso) (62J07)
Asymptotic properties of nonparametric inference (62G20)
Linear regression; mixed models (62J05)
Related Items (3)
On the selection of predictors by using greedy algorithms and information theoretic criteria ⋮ Estimation for the prediction of point processes with many covariates ⋮ Semiparametric estimation of plane similarities: application to fast computation of aeronautic loads
Cites Work
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Nearly unbiased variable selection under minimax concave penalty
- Statistical significance in high-dimensional linear models
- Degrees of freedom in lasso problems
- Statistics for high-dimensional data. Methods, theory and applications.
- High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity
- Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
- Bootstrap model selection for possibly dependent and heterogeneous data
- Mixing properties of polynomial autoregressive processes (Propriétés de mélange des processus autorégressifs polynomiaux)
- Basic properties of strong mixing conditions. A survey and some open questions
- A dynamic model of expected bond returns: A functional gradient descent approach
- Mixing properties of ARMA processes
- A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training
- A new mixing notion and functional central limit theorems for a sieve bootstrap in time series
- Invariance principles for absolutely regular empirical processes
- Two lower estimates in greedy approximation
- A new weak dependence condition and applications to moment inequalities
- Persistence in high-dimensional linear predictor-selection and the virtue of overparametrization
- Regular variation of GARCH processes.
- Minimax estimation via wavelet shrinkage
- Maximal inequalities via bracketing with adaptive truncation
- Least angle regression. (With discussion)
- Some remarks on greedy algorithms
- \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
- Asymptotic theory of weakly dependent stochastic processes
- Weak greedy algorithms
- High-dimensional generalized linear models and the lasso
- Sparsity oracle inequalities for the Lasso
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- Confidence sets in sparse regression
- On the uniform convergence of empirical norms and inner products, with application to causal inference
- Aggregation for Gaussian regression
- Pathwise coordinate optimization
- On the "degrees of freedom" of the lasso
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Approximation and learning by greedy algorithms
- Boosting for high-dimensional linear models
- A new covariance inequality and applications.
- Sparse Models and Methods for Optimal Instruments With an Application to Eminent Domain
- Splines for Financial Volatility
- Greedy Approximation
- Estimating the Error Rate of a Prediction Rule: Improvement on Cross-Validation
- Non-strong mixing autoregressive processes
- Risk bounds for mixture density estimation
- A maximal \(\mathbb{L}_p\)-inequality for stationary sequences and its applications
- Forecasting Time Series Subject to Multiple Structural Breaks
- A nonparametric estimator for the covariance function of functional data
- On Measuring and Correcting the Effects of Data Mining and Model Selection
- Data-dependent estimation of prediction functions
- Smoothing Parameter Selection in Nonparametric Regression Using an Improved Akaike Information Criterion
- Universal approximation bounds for superpositions of a sigmoidal function
- A cross-validatory method for dependent data
- Boosting with the \(L_2\) loss
- Sieve Extremum Estimates for Weakly Dependent Data
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Matching pursuits with time-frequency dictionaries
- Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise
- Learning Theory and Kernel Machines
- A Recursive Algorithm for Mixture of Densities Estimation
- Forecast Combination Across Estimation Windows
- Introduction to nonparametric estimation
- Convergence of a block coordinate descent method for nondifferentiable minimization
- Gaussian model selection