Model selection and error estimation

DOI: 10.1023/A:1013999503812
zbMath: 0998.68117
Wikidata: Q58374484
Scholia: Q58374484
MaRDI QID: Q5959937

Stéphane Boucheron, Gábor Lugosi, Peter L. Bartlett

Publication date: 11 April 2002

Published in: Machine Learning

Related Items

Bounding the generalization error of convex combinations of classifiers: Balancing the dimensionality and the margins.
Deep learning: a statistical viewpoint
Complexity regularization via localized random penalties
Optimal aggregation of classifiers in statistical learning.
A penalized criterion for variable selection in classification
On robust learning in the canonical change point problem under heavy tailed errors in finite and growing dimensions
Concentration inequalities for non-causal random fields
Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
On improved loss estimation for shrinkage estimators
The Loss Rank Criterion for Variable Selection in Linear Regression Analysis
Model selection by bootstrap penalization for classification
Complexity of hyperconcepts
The two-sample problem for Poisson processes: adaptive tests with a nonasymptotic wild bootstrap approach
Local Rademacher complexity: sharper risk bounds with and without unlabeled samples
Matrixized learning machine with modified pairwise constraints
Hold-out estimates of prediction models for Markov processes
Model selection in nonparametric regression
Model selection in reinforcement learning
Adaptive estimation of a distribution function and its density in sup-norm loss by wavelet and spline projections
Global uniform risk bounds for wavelet deconvolution estimators
Bootstrap model selection for possibly dependent and heterogeneous data
Optimal model selection in heteroscedastic regression using piecewise polynomial functions
Model selection by resampling penalization
Penalized empirical risk minimization over Besov spaces
Relative deviation learning bounds and generalization with unbounded loss functions
An improved analysis of the Rademacher data-dependent bound using its self bounding property
Generalization ability of fractional polynomial models
On learning multicategory classification with sample queries.
Concentration inequalities using the entropy method
A statistician teaches deep learning
A goodness-of-fit test based on neural network sieve estimators
Learning by mirror averaging
Double-fold localized multiple matrixized learning machine
Model selection with the loss rank principle
A survey of cross-validation procedures for model selection
Empirical minimization
Moment inequalities for functions of independent random variables
An empirical study of the complexity and randomness of prediction error sequences
Model selection in utility-maximizing binary prediction
A high-dimensional Wilks phenomenon
Quantization and clustering with Bregman divergences
A local Vapnik-Chervonenkis complexity
Sparse estimation by exponential weighting
Generalized mirror averaging and \(D\)-convex aggregation
Theory of Classification: a Survey of Some Recent Advances
A permutation approach to validation
FAST RATES FOR ESTIMATION ERROR AND ORACLE INEQUALITIES FOR MODEL SELECTION
Estimation of the conditional risk in classification: the swapping method
Inference on covariance operators via concentration inequalities: \(k\)-sample tests, classification, and clustering via Rademacher complexities
Rademacher complexity in Neyman-Pearson classification
Local Rademacher complexities
Minimax fast rates for discriminant analysis with errors in variables
Learning in Repeated Auctions
Improved loss estimation for the lasso: a variable selection tool

