Boosting with early stopping: convergence and consistency
DOI: 10.1214/009053605000000255
zbMATH: 1078.62038
arXiv: math/0508276
OpenAlex: W3098897816
Wikidata: Q56169183 (Scholia: Q56169183)
MaRDI QID: Q2583412
Publication date: 16 January 2006
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/math/0508276
Classification (MSC):
- 62G08 Nonparametric regression and quantile regression
- 62G20 Asymptotic properties of nonparametric inference
- 62H30 Classification and discrimination; cluster analysis (statistical aspects)
- 62G05 Nonparametric estimation
- 60G40 Stopping times; optimal stopping problems; gambling theory
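The paper proves numerical convergence and Bayes-risk consistency for greedy boosting procedures that are halted by an early stopping rule. As a rough illustration of the kind of procedure analyzed, the sketch below runs L2 gradient boosting with decision stumps and stops once holdout error stalls; the holdout rule, step size, cap, and toy data are illustrative assumptions, not the paper's construction (the paper's stopping rules are deterministic functions of the sample size).

```python
import numpy as np

# Illustrative L2 gradient boosting with decision stumps and holdout-based
# early stopping. All constants (step size, cap, patience, data) are
# assumptions for this demo, not taken from the paper.

rng = np.random.default_rng(0)

def fit_stump(x, r):
    """Least-squares decision stump on 1-D input x, fit to residuals r."""
    best = None
    for t in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        left, right = x <= t, x > t
        cl, cr = r[left].mean(), r[right].mean()
        sse = ((r[left] - cl) ** 2).sum() + ((r[right] - cr) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, cl, cr)
    _, t, cl, cr = best
    return lambda z: np.where(z <= t, cl, cr)

# Toy regression data split into training and holdout halves.
x = rng.uniform(-3, 3, 400)
y = np.sin(x) + 0.3 * rng.standard_normal(400)
xt, yt, xv, yv = x[:200], y[:200], x[200:], y[200:]

nu, n_rounds, patience = 0.1, 500, 20    # step size, iteration cap, patience
Ft, Fv = np.zeros_like(yt), np.zeros_like(yv)
best_err, best_iter, ensemble = np.inf, 0, []

for m in range(n_rounds):
    stump = fit_stump(xt, yt - Ft)       # greedy step on current residuals
    Ft += nu * stump(xt)
    Fv += nu * stump(xv)
    ensemble.append(stump)
    err = ((yv - Fv) ** 2).mean()        # track holdout risk
    if err < best_err:
        best_err, best_iter = err, m + 1
    elif m + 1 - best_iter >= patience:  # stop once holdout risk stalls
        break

print(f"stopped after {m + 1} rounds; best holdout MSE {best_err:.4f} at round {best_iter}")
```

Stopping early here plays the role of regularization: letting the loop run to the cap would keep driving the training residuals down while the holdout risk eventually rises.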
Related Items
- AdaBoost Semiparametric Model Averaging Prediction for Multiple Categories
- Coupling the reduced-order model and the generative model for an importance sampling estimator
- Deep learning: a statistical viewpoint
- Consistency and generalization bounds for maximum entropy density estimation
- Tweedie gradient boosting for extremely unbalanced zero-inflated data
- Mathematical foundations of machine learning. Abstracts from the workshop held March 21--27, 2021 (hybrid meeting)
- Population theory for boosting ensembles.
- Nonparametric stochastic approximation with large step-sizes
- Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
- Regularization in statistics
- Small area estimation of the homeless in Los Angeles: an application of cost-sensitive stochastic gradient boosting
- Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping
- Explainable subgradient tree boosting for prescriptive analytics in operations management
- Infinitesimal gradient boosting
- Accelerated gradient boosting
- Deep learning for natural language processing: a survey
- Aggregation of estimators and stochastic optimization
- Estimation and inference of treatment effects with \(L_2\)-boosting in high-dimensional settings
- Unbiased Boosting Estimation for Censored Survival Data
- Survival ensembles by the sum of pairwise differences with application to lung cancer microarray studies
- Boosting algorithms: regularization, prediction and model fitting
- A boosting method for maximization of the area under the ROC curve
- Random classification noise defeats all convex potential boosters
- Fully corrective boosting with arbitrary loss and regularization
- Rejoinder: One-step sparse estimates in nonconcave penalized likelihood models
- Variational networks: an optimal control approach to early stopping variational methods for image restoration
- Analysis of boosting algorithms using the smooth margin function
- Supervised projection approach for boosting classifiers
- Boosting for high-dimensional linear models
- Bi-cross-validation for factor analysis
- Deformation of log-likelihood loss function for multiclass boosting
- Stochastic boosting algorithms
- Boosting in the Presence of Outliers: Adaptive Classification With Nonconvex Loss Functions
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Randomized Gradient Boosting Machine
- Boosted nonparametric hazards with time-dependent covariates
- Random gradient boosting for predicting conditional quantiles
- Fast and strong convergence of online learning algorithms
- Double machine learning with gradient boosting and its application to the Big \(N\) audit quality effect
- Dimension reduction boosting
- A stochastic approximation view of boosting
- SVM-boosting based on Markov resampling: theory and algorithm
- On boosting kernel regression
- Adaptive step-length selection in gradient boosting for Gaussian location and scale models
- A boosting inspired personalized threshold method for sepsis screening
- AdaBoost and robust one-bit compressed sensing
- Complexities of convex combinations and bounding the generalization error in classification
- A new accelerated proximal boosting machine with convergence rate \(O(1/t^2)\)
- Optimization by Gradient Boosting
- Implicit regularization with strongly convex bias: Stability and acceleration
Uses Software
Cites Work
- Greedy function approximation: A gradient boosting machine.
- A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training
- A decision-theoretic generalization of on-line learning and an application to boosting
- Arcing classifiers. (With discussion)
- Boosting the margin: a new explanation for the effectiveness of voting methods
- Additive logistic regression: a statistical view of boosting. (With discussion and a rejoinder by the authors)
- Empirical margin distributions and bounding the generalization error of combined classifiers
- Population theory for boosting ensembles.
- Process consistency for AdaBoost.
- On the Bayes-risk consistency of regularized boosting methods.
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Weak convergence and empirical processes. With applications to statistics
- Improved boosting algorithms using confidence-rated predictions
- Complexities of convex combinations and bounding the generalization error in classification
- Local Rademacher complexities
- Universal approximation bounds for superpositions of a sigmoidal function
- Efficient agnostic learning of neural networks with bounded fan-in
- Boosting With the L2 Loss: Regression and Classification
- Sequential greedy approximation for certain convex optimization problems
- DOI: 10.1162/1532443041424300
- DOI: 10.1162/1532443041424319
- DOI: 10.1162/153244303321897690
- DOI: 10.1162/153244304773936108
- Matching pursuits with time-frequency dictionaries
- Convexity, Classification, and Risk Bounds
- The elements of statistical learning. Data mining, inference, and prediction
- Logistic regression, AdaBoost and Bregman distances