Large Scale Prediction with Decision Trees
From MaRDI portal
Publication: 6154011
DOI: 10.1080/01621459.2022.2126782
arXiv: 2104.13881
OpenAlex: W4388630546
MaRDI QID: Q6154011
Peter M. Tian, Jason M. Klusowski
Publication date: 19 March 2024
Published in: Journal of the American Statistical Association
Full work available at URL: https://arxiv.org/abs/2104.13881
Cites Work
- Greedy function approximation: A gradient boosting machine.
- Bagging predictors
- On bagging and nonlinear estimation
- Consistent nonparametric regression. Discussion
- On the boosting ability of top-down decision tree learning algorithms
- CART and best-ortho-basis: a connection
- Multivariate locally weighted least squares regression
- Boosting the margin: a new explanation for the effectiveness of voting methods
- Adaptive estimation of multivariate piecewise polynomials and bounded variation functions by optimal decision trees
- Posterior concentration for Bayesian regression trees and forests
- Minimax optimal rates for Mondrian trees and forests
- Doubly penalized estimation in additive regression with high-dimensional data
- Nonparametric estimation of an additive model with a link function
- Boosting for high-dimensional linear models
- Consistency of random forests
- Model Selection for CART Regression Trees
- Local Properties of k-NN Regression Estimates
- Estimation and Inference of Heterogeneous Treatment Effects using Random Forests
- Impact of subsampling and tree depth on random forests
- AMF: Aggregated Mondrian Forests for Online Learning
- High-Dimensional Classification by Sparse Logistic Regression
- Generalized Additive Modeling with Implicit Variable Selection by Likelihood‐Based Boosting
- Minimax-optimal rates for sparse additive models over kernel classes via convex programming
- Random forests