
Boosting with early stopping: convergence and consistency

From MaRDI portal
Publication:2583412

DOI: 10.1214/009053605000000255
zbMath: 1078.62038
arXiv: math/0508276
OpenAlex: W3098897816
Wikidata: Q56169183
Scholia: Q56169183
MaRDI QID: Q2583412

Bin Yu, Tong Zhang

Publication date: 16 January 2006

Published in: The Annals of Statistics

Full work available at URL: https://arxiv.org/abs/math/0508276



Related Items

AdaBoost Semiparametric Model Averaging Prediction for Multiple Categories
Coupling the reduced-order model and the generative model for an importance sampling estimator
Deep learning: a statistical viewpoint
Consistency and generalization bounds for maximum entropy density estimation
Tweedie gradient boosting for extremely unbalanced zero-inflated data
Mathematical foundations of machine learning. Abstracts from the workshop held March 21--27, 2021 (hybrid meeting)
Population theory for boosting ensembles
Nonparametric stochastic approximation with large step-sizes
Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications
A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
Regularization in statistics
Small area estimation of the homeless in Los Angeles: an application of cost-sensitive stochastic gradient boosting
Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping
Explainable subgradient tree boosting for prescriptive analytics in operations management
Infinitesimal gradient boosting
Accelerated gradient boosting
Deep learning for natural language processing: a survey
Aggregation of estimators and stochastic optimization
Estimation and inference of treatment effects with \(L_2\)-boosting in high-dimensional settings
Unbiased Boosting Estimation for Censored Survival Data
Survival ensembles by the sum of pairwise differences with application to lung cancer microarray studies
Boosting algorithms: regularization, prediction and model fitting
A boosting method for maximization of the area under the ROC curve
Random classification noise defeats all convex potential boosters
Fully corrective boosting with arbitrary loss and regularization
Rejoinder: One-step sparse estimates in nonconcave penalized likelihood models
Variational networks: an optimal control approach to early stopping variational methods for image restoration
Analysis of boosting algorithms using the smooth margin function
Supervised projection approach for boosting classifiers
Boosting for high-dimensional linear models
Bi-cross-validation for factor analysis
Deformation of log-likelihood loss function for multiclass boosting
Stochastic boosting algorithms
Boosting in the Presence of Outliers: Adaptive Classification With Nonconvex Loss Functions
Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
Randomized Gradient Boosting Machine
Boosted nonparametric hazards with time-dependent covariates
Random gradient boosting for predicting conditional quantiles
Fast and strong convergence of online learning algorithms
Double machine learning with gradient boosting and its application to the Big \(N\) audit quality effect
Dimension reduction boosting
A stochastic approximation view of boosting
SVM-boosting based on Markov resampling: theory and algorithm
On boosting kernel regression
Stochastic boosting algorithms
Adaptive step-length selection in gradient boosting for Gaussian location and scale models
A boosting inspired personalized threshold method for sepsis screening
AdaBoost and robust one-bit compressed sensing
Complexities of convex combinations and bounding the generalization error in classification
A new accelerated proximal boosting machine with convergence rate \(O(1/t^2)\)
Optimization by Gradient Boosting
Implicit regularization with strongly convex bias: Stability and acceleration


Uses Software


Cites Work