Accelerated Componentwise Gradient Boosting Using Efficient Data Representation and Momentum-Based Optimization
From MaRDI portal
Publication: 6094092
DOI: 10.1080/10618600.2022.2116446 · arXiv: 2110.03513 · OpenAlex: W3203502408 · MaRDI QID: Q6094092
David Rügamer, Unnamed Author, Bernd Bischl
Publication date: 9 October 2023
Published in: Journal of Computational and Graphical Statistics
Full work available at URL: https://arxiv.org/abs/2110.03513
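The record carries no abstract, so as a rough illustration of the technique the title names, the following is a minimal sketch of componentwise \(L_2\)-boosting with a simple heavy-ball momentum term. This is an assumption-laden toy, not the paper's actual accelerated (Nesterov-style) scheme or its efficient data representation; every name and parameter below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
# true signal uses only covariates 0 and 2
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=n)

def cwb_momentum(X, y, nu=0.1, gamma=0.5, iters=200):
    """Toy componentwise L2-boosting with heavy-ball momentum.

    nu    -- learning rate applied to the selected base learner
    gamma -- momentum factor on the aggregated coefficient updates
    (Both are illustrative choices, not values from the paper.)
    """
    n, p = X.shape
    beta = np.zeros(p)  # aggregated coefficients of the additive model
    vel = np.zeros(p)   # momentum buffer
    for _ in range(iters):
        r = y - X @ beta  # residuals = negative gradient of the L2 loss
        # fit every single-covariate least-squares base learner ...
        coefs = (X * r[:, None]).sum(0) / (X ** 2).sum(0)
        # ... and keep the one with the smallest residual sum of squares
        sse = ((r[:, None] - X * coefs) ** 2).sum(0)
        j = int(np.argmin(sse))
        step = np.zeros(p)
        step[j] = nu * coefs[j]
        vel = gamma * vel + step  # heavy-ball momentum update
        beta = beta + vel
    return beta

beta = cwb_momentum(X, y)
# beta should roughly recover [2, 0, -1, 0, 0]
```

With momentum, the effective step along a repeatedly selected coordinate grows toward nu / (1 - gamma), which is the intuition behind the speed-up: fewer boosting iterations are needed to reach the same fit than with plain componentwise boosting.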
Related Items (1)
Cites Work
- Greedy function approximation: A gradient boosting machine.
- Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization
- Boosting algorithms: regularization, prediction and model fitting
- Multilevel structured additive regression
- Flexible smoothing with \(B\)-splines and penalties. With comments and a rejoinder by the authors
- Probing for sparse and fast variable selection with model-based boosting
- Process consistency for AdaBoost.
- Faster model matrix crossproducts for large generalized linear models with discretized covariates
- Inference for \(L_2\)-boosting
- Sparse matrix test problems
- Boosting With the \(L_2\) Loss