Boosting for high-dimensional linear models
From MaRDI portal
Publication:2497175
DOI: 10.1214/009053606000000092
zbMath: 1095.62077
arXiv: math/0606789
OpenAlex: W2952563653
MaRDI QID: Q2497175
Publication date: 3 August 2006
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/math/0606789
Keywords: variable selection; overcomplete dictionary; sparsity; gene expression; Lasso; binary classification; matching pursuit; weak greedy algorithm
MSC classifications:
- Linear regression; mixed models (62J05)
- Computational learning theory (68Q32)
- Applications of statistics to biology and medical sciences; meta analysis (62P10)
- Linear inference, regression (62J99)
- Newton-type methods (49M15)
- Statistical aspects of information-theoretic topics (62B10)
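The method at the center of this paper, componentwise \(L_2\) boosting, repeatedly fits the single predictor that best explains the current residuals and takes a shrunken step toward it. A minimal sketch follows; the step length `nu`, the iteration count, and the function name `l2_boost` are illustrative choices, not taken from the paper:

```python
import numpy as np

def l2_boost(X, y, n_steps=200, nu=0.1):
    """Componentwise L2 boosting (a sketch): at each step, compute the
    univariate least-squares fit for every predictor, pick the one that
    most reduces the residual sum of squares, and move its coefficient
    by a shrunken step of length nu."""
    n, p = X.shape
    beta = np.zeros(p)
    resid = y.astype(float).copy()
    denom = (X ** 2).sum(axis=0)        # per-column sum of squares
    for _ in range(n_steps):
        coefs = X.T @ resid / denom     # univariate LS coefficients
        j = int(np.argmax(coefs ** 2 * denom))  # largest RSS reduction
        beta[j] += nu * coefs[j]
        resid -= nu * coefs[j] * X[:, j]
    return beta
```

Early stopping (choosing `n_steps`) acts as the regularizer here, which is what links this procedure to the Lasso, matching pursuit, and the weak greedy algorithms listed among the keywords.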
Related Items
- Greedy algorithms for prediction
- Two-step sparse boosting for high-dimensional longitudinal data with varying coefficients
- Using random subspace method for prediction and variable importance assessment in linear regression
- A review on instance ranking problems in statistical learning
- Conditional sparse boosting for high-dimensional instrumental variable estimation
- High-dimensional variable selection
- Post-model-selection inference in linear regression models: an integrated review
- Boosting techniques for nonlinear time series models
- Wavelet-based gradient boosting
- Thresholding least-squares inference in high-dimensional regression models
- Econometric estimation with high-dimensional moment equalities
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
- A Tree-Based Semi-Varying Coefficient Model for the COM-Poisson Distribution
- A Nonparametric Estimator for the Covariance Function of Functional Data
- Splines for Financial Volatility
- Tilting Methods for Assessing the Influence of Components in a Classifier
- The expectation-maximization approach for Bayesian quantile regression
- Sparsity identification for high-dimensional partially linear model with measurement error
- Statistical significance in high-dimensional linear models
- Regularization in statistics
- Boosting kernel-based dimension reduction for jointly propagating spatial variability and parameter uncertainty in long-running flow simulators
- Stratified Cox models with time-varying effects for national kidney transplant patients: A new blockwise steepest ascent method
- Greedy Variable Selection for High-Dimensional Cox Models
- Reprint of: A forward-backward greedy approach for sparse multiscale learning
- Bayesian variable selection and estimation in semiparametric joint models of multivariate longitudinal and survival data
- Using predictability to improve matching of urban locations in Philadelphia
- Asymptotic properties of bridge estimators in sparse high-dimensional regression models
- Large Scale Prediction with Decision Trees
- Forward-selected panel data approach for program evaluation
- Estimation and inference of treatment effects with \(L_2\)-boosting in high-dimensional settings
- Gibbs Priors for Bayesian Nonparametric Variable Selection with Weak Learners
- Feature Selection by Canonical Correlation Search in High-Dimensional Multiresponse Models With Complex Group Structures
- Characterizing \(L_2\)Boosting
- Boosting algorithms: regularization, prediction and model fitting
- Model selection for high-dimensional linear regression with dependent observations
- Sparse regression and support recovery with \(\mathbb{L}_2\)-boosting algorithms
- Boosting iterative stochastic ensemble method for nonlinear calibration of subsurface flow models
- Regression with stagewise minimization on risk function
- A look at robustness and stability of \(\ell_1\)- versus \(\ell_0\)-regularization: discussion of papers by Bertsimas et al. and Hastie et al.
- Bayesian variable selection for high dimensional generalized linear models: convergence rates of the fitted densities
- Feature selection in finite mixture of sparse normal linear models in high-dimensional feature space
- A sequential approach to feature selection in high-dimensional additive models
- On b-bit min-wise hashing for large-scale regression and classification with sparse data
- Boosting additive models using component-wise P-splines
- Shrinkage and model selection with correlated variables via weighted fusion
- Additive prediction and boosting for functional data
- Boosting nonlinear additive autoregressive time series
- Multinomial logit models with implicit variable selection
- Regularization method for predicting an ordinal response using longitudinal high-dimensional genomic data
- Model-based boosting in R: a hands-on tutorial using the R package mboost
- Tests of the Martingale Difference Hypothesis Using Boosting and RBF Neural Network Approximations
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Lasso Inference for High-Dimensional Time Series
- Deformation of log-likelihood loss function for multiclass boosting
- On the choice and influence of the number of boosting steps for high-dimensional linear Cox-models
- Instrumental variables estimation with many weak instruments using regularized JIVE
- Stochastic approximation: from statistical origin to big-data, multidisciplinary applications
- On the differences between \(L_2\) boosting and the Lasso
- Boosting in Cox regression: a comparison between the likelihood-based and the model-based approaches with focus on the R-packages \textit{CoxBoost} and \textit{mboost}
- Boosting high dimensional predictive regressions with time varying parameters
- Discussion on “Two-Stage Procedures for High-Dimensional Data” by Makoto Aoshima and Kazuyoshi Yata
- Subject-specific Bradley–Terry–Luce models with implicit variable selection
- Knot selection by boosting techniques
- Boosting ridge regression
- Variable Selection and Model Choice in Geoadditive Regression Models
- Robustified \(L_2\) boosting
- Simultaneous selection of variables and smoothing parameters in structured additive regression models
- On boosting kernel regression
- Asymptotic linear expansion of regularized M-estimators
- Detection of differential item functioning in Rasch models by boosting techniques
- Variable selection in high-dimensional sparse multiresponse linear regression models
- Scalar on network regression via boosting
- A forward-backward greedy approach for sparse multiscale learning
- A sequential feature selection procedure for high-dimensional Cox proportional hazards model
- Adaptive step-length selection in gradient boosting for Gaussian location and scale models
- AdaBoost and robust one-bit compressed sensing
- General Sparse Boosting: Improving Feature Selection of L2Boosting by Correlation-Based Penalty Family
- Iterative selection using orthogonal regression techniques
- Optimization by Gradient Boosting
Uses Software
Cites Work
- Greedy function approximation: A gradient boosting machine.
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Arcing classifiers. (With discussion)
- Additive logistic regression: a statistical view of boosting. (With discussion and a rejoinder by the authors)
- Adaptive prediction and estimation in linear regression with infinitely many parameters.
- Finding predictive gene groups from microarray data
- Least angle regression. (With discussion)
- Process consistency for AdaBoost.
- On the Bayes-risk consistency of regularized boosting methods.
- Weak greedy algorithms
- Boosting with early stopping: convergence and consistency
- Atomic Decomposition by Basis Pursuit
- Smoothing Parameter Selection in Nonparametric Regression Using an Improved Akaike Information Criterion
- Comparison of Discrimination Methods for the Classification of Tumors Using Gene Expression Data
- Boosting With the \(L_2\) Loss
- Matching pursuits with time-frequency dictionaries
- Regularization and Variable Selection Via the Elastic Net