Infinitesimal gradient boosting
DOI: 10.1016/j.spa.2024.104310 · arXiv: 2104.13208 · Wikidata: Q129416662 · Scholia: Q129416662 · MaRDI QID: Q6123287
Jean-Jil Duchamps, Clément Dombry
Publication date: 4 March 2024
Published in: Stochastic Processes and their Applications
Full work available at URL: https://arxiv.org/abs/2104.13208
Keywords: gradient boosting; convergence of Markov processes; softmax regression tree; vanishing-learning-rate asymptotic
MSC classification:
- 62P20 Applications of statistics to economics
- 62G05 Nonparametric estimation
- 60J20 Applications of Markov chains and discrete-time Markov processes on general state spaces (social mobility, learning theory, industrial processes, etc.)
- 60F17 Functional limit theorems; invariance principles
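As context for the "vanishing-learning-rate asymptotic" keyword above: the regime studied is gradient boosting where the learning rate tends to zero while the number of boosting steps grows proportionally, so that total "boosting time" is held fixed. The following is a minimal illustrative sketch, not the paper's construction: a generic squared-loss boosting loop with depth-1 regression trees (stumps) on one feature; all function names (`fit_stump`, `boost`) and the toy data are assumptions made for illustration.

```python
import numpy as np

def fit_stump(x, r):
    """Fit a depth-1 regression tree (stump) to residuals r by
    exhaustive search over split points of the 1-D feature x."""
    best = (np.inf, None, 0.0, 0.0)
    for s in np.unique(x)[:-1]:  # exclude max so the right leaf is nonempty
        left, right = r[x <= s], r[x > s]
        ml, mr = left.mean(), right.mean()
        sse = ((left - ml) ** 2).sum() + ((right - mr) ** 2).sum()
        if sse < best[0]:
            best = (sse, s, ml, mr)
    _, s, ml, mr = best
    return lambda z: np.where(z <= s, ml, mr)

def boost(x, y, nu, n_steps):
    """Gradient boosting for squared loss: each step fits a stump to
    the current residuals y - F(x) and adds it with learning rate nu."""
    F = np.zeros_like(y, dtype=float)
    for _ in range(n_steps):
        h = fit_stump(x, y - F)
        F = F + nu * h(x)
    return F

# Toy data: a noiseless step function.
x = np.linspace(0.0, 1.0, 50)
y = (x > 0.5).astype(float)

# Vanishing-learning-rate regime: nu -> 0 while the number of steps
# scales like T / nu, keeping the "boosting time" T fixed.
T = 5.0
for nu in (0.5, 0.1, 0.02):
    F = boost(x, y, nu, int(T / nu))
    print(nu, float(np.abs(F - y).max()))
```

On this toy example the training error after `T / nu` steps stabilizes as `nu` shrinks, which is the kind of small-learning-rate limit the paper formalizes as a Markov-process convergence result.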
Cites Work
- Greedy function approximation: A gradient boosting machine
- Semimartingales: A course on stochastic processes
- Adaptive game playing using multiplicative weights
- Additive logistic regression: a statistical view of boosting (with discussion and a rejoinder by the authors)
- Population theory for boosting ensembles
- Process consistency for AdaBoost
- On the Bayes-risk consistency of regularized boosting methods
- Boosting with early stopping: convergence and consistency
- Boosting With the L2 Loss
- DOI: 10.1162/1532443041424319
- Uniformity in weak convergence
- On Weak Convergence of Stochastic Processes with Multidimensional Time Parameter
- Optimization by Gradient Boosting
- Extremely randomized trees
- Stochastic gradient boosting