Model selection for high-dimensional linear regression with dependent observations
DOI: 10.1214/19-AOS1872 · zbMath: 1459.62172 · arXiv: 1906.07395 · OpenAlex: W3049332904 · MaRDI QID: Q2215720
Publication date: 14 December 2020
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1906.07395
Keywords: time series; orthogonal greedy algorithm; best \(m\)-term approximations; high-dimensional Akaike's information criterion; sparsity conditions
MSC classifications:
- Asymptotic properties of parametric estimators (62F12)
- Time series, auto-correlation, regression, etc. in statistics (GARCH) (62M10)
- Linear regression; mixed models (62J05)
- Statistical ranking and selection procedures (62F07)
- Statistical aspects of information-theoretic topics (62B10)
Related Items (4)
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- Regularized estimation in sparse high-dimensional time series models
- A stepwise regression method and consistent model selection for high-dimensional sparse linear models
- Cross-validation for selecting a model selection procedure
- Concentration inequalities for non-Lipschitz functions with bounded derivatives of higher order
- Hanson-Wright inequality and sub-Gaussian concentration
- On the convergence of finite linear predictors of stationary processes
- The best \(m\)-term approximation and greedy algorithms
- Asymptotically efficient selection of the order of the model for estimating parameters of a linear process
- Consistent autoregressive spectral estimates
- Metric entropy and sparse linear approximation of \(\ell_q\)-hulls for \(0<q\leq 1\)
- Weak greedy algorithms
- Greedy approximation in convex optimization
- Simultaneous analysis of Lasso and Dantzig selector
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Boosting for high-dimensional linear models
- Accumulated prediction errors, information criteria and optimal forecasting for autoregressive time series
- Adaptive Minimax Estimation over Sparse $\ell_q$-Hulls
- Forward Regression for Ultra-High Dimensional Variable Screening
- Extended Bayesian information criteria for model selection with large model spaces
- Greed is Good: Algorithmic Results for Sparse Approximation
- An Asymptotic Result for the Finite Predictor.
- PREDICTION/ESTIMATION WITH SIMPLE LINEAR MODELS: IS IT REALLY THAT SIMPLE?
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- Bridging AIC and BIC: A New Criterion for Autoregression
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Performance bounds for parameter estimates of high-dimensional linear models with correlated errors