Optimal rates for regularization of statistical inverse learning problems
DOI: 10.1007/s10208-017-9359-7 · zbMath: 1412.62042 · arXiv: 1604.04054 · OpenAlex: W2963053844 · MaRDI QID: Q667648
Nicole Mücke, Gilles Blanchard
Publication date: 1 March 2019
Published in: Foundations of Computational Mathematics
Full work available at URL: https://arxiv.org/abs/1604.04054
Keywords: inverse problem; reproducing kernel Hilbert space; minimax convergence rates; statistical learning; spectral regularization
MSC classification
- Nonparametric regression and quantile regression (62G08)
- Asymptotic properties of nonparametric inference (62G20)
- Computational learning theory (68Q32)
- Numerical solution to inverse problems in abstract spaces (65J22)
- Linear operators in reproducing-kernel Hilbert spaces (including de Branges, de Branges-Rovnyak, and other structured spaces) (47B32)
Related Items (37)
Cites Work
- Inverse statistical learning
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Regularization in kernel learning
- On regularization algorithms in learning theory
- A distribution-free theory of nonparametric regression
- Best choices for regularization parameters in learning theory: on the bias-variance problem.
- Minimax fast rates for discriminant analysis with errors in variables
- Optimal rates for the regularized least-squares algorithm
- Approximation methods for supervised learning
- Approximation in learning theory
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- On early stopping in gradient descent learning
- Fréchet derivatives of the power function
- Convergence rates of Kernel Conjugate Gradient for random design regression
- Convergence Characteristics of Methods of Regularization Estimators for Nonlinear Operator Equations
- Discretization error analysis for Tikhonov regularization
- Support Vector Machines
- Spectral Algorithms for Supervised Learning
- Cross-validation based adaptation for regularization operators in learning theory
- Geometry of linear ill-posed problems in variable Hilbert scales
- Boosting with the L2 loss
- Convergence Rates of General Regularization Methods for Statistical Inverse Problems and Applications
- Introduction to nonparametric estimation