scientific article; zbMATH DE number 7049765
From MaRDI portal
zbMath: 1484.68189
MaRDI QID: Q4633060
Authors: Yunwen Lei, Ding-Xuan Zhou, Shao-Bo Lin
Publication date: 2 May 2019
Full work available at URL: http://jmlr.csail.mit.edu/papers/v20/18-063.html
Title: zbMATH Open Web Interface contents unavailable due to conflicting licenses.
Classification:
- Nonparametric regression and quantile regression (62G08)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Learning and adaptive systems in artificial intelligence (68T05)
- Optimal stopping in statistics (62L15)
Related Items (3)
- Unnamed Item
- Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping
- SVM-boosting based on Markov resampling: theory and algorithm
Cites Work
- Unnamed Item
- Greedy function approximation: a gradient boosting machine
- Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- \(L_{2}\) boosting in kernel regression
- On regularization algorithms in learning theory
- On the choice of the regularization parameter for iterated Tikhonov regularization of ill-posed problems
- Nonstationary iterated Tikhonov regularization
- Optimal learning rates for kernel partial least squares
- Distributed kernel-based gradient descent algorithms
- A distribution-free theory of nonparametric regression
- Additive logistic regression: a statistical view of boosting (with discussion and a rejoinder by the authors)
- Boosting a weak learning algorithm by majority
- An extension of Mercer theorem to matrix-valued measurable kernels
- Regularization networks and support vector machines
- Optimal rates for the regularized least-squares algorithm
- Learning theory estimates via integral operators and their approximations
- On early stopping in gradient descent learning
- Convergence rates of Kernel Conjugate Gradient for random design regression
- Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces
- Learning Theory
- Spectral Algorithms for Supervised Learning
- Cross-validation based adaptation for regularization operators in learning theory
- Approximation of generalized inverses by iterated regularization
- On the choice of the regularization parameter for ordinary and iterated Tikhonov regularization of nonlinear ill-posed problems
- Boosting with the \(L_{2}\) loss
- Shannon sampling and function reconstruction from point values
- Thresholded spectral algorithms for sparse approximations
- Learning theory of distributed spectral algorithms