Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
DOI: 10.1137/19M1256038 · zbMath: 1456.62089 · MaRDI QID: Q3386994
Authors: Sabrina Guastavino, Federico Benvenuto
Publication date: 12 January 2021
Published in: SIAM Journal on Numerical Analysis
Keywords: convergence rates; spectral regularization; linear ill-posed inverse problems; statistical kernel learning
MSC classifications:
- Asymptotic properties of nonparametric inference (62G20)
- Computational learning theory (68Q32)
- Numerical solutions of ill-posed problems in abstract spaces; regularization (65J20)
Related Items (2)
- Moving least squares approximation using variably scaled discontinuous weight function
- Learning with partition of unity-based kriging estimators
Cites Work
- Regularization theory for ill-posed problems. Selected topics
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Optimal rates for regularization of statistical inverse learning problems
- Regularization in kernel learning
- On regularization algorithms in learning theory
- Higher order convergence rates for Bregman iterated variational regularization of inverse problems
- Statistical and computational inverse problems.
- A consistent and numerically efficient variable selection method for sparse Poisson regression with applications to learning and signal recovery
- Optimal rates for the regularized least-squares algorithm
- Penalty-based smoothness conditions in convex variational regularization
- Learning theory estimates via integral operators and their approximations
- On early stopping in gradient descent learning
- Optimal Convergence Rates Results for Linear Inverse Problems in Hilbert Spaces
- Discretization error analysis for Tikhonov regularization
- Regularization of some linear ill-posed problems with discretized random noisy data
- Learning Theory
- Support Vector Machines
- Analysis of Profile Functions for General Linear Regularization Methods
- Spectral Algorithms for Supervised Learning
- On Convergence of Kernel Learning Estimators
- Geometry of linear ill-posed problems in variable Hilbert scales
- Discretization strategy for linear ill-posed problems in variable Hilbert scales
- Solving ill-posed inverse problems using iterative deep neural networks
- Statistical Inverse Estimation in Hilbert Scales
- Computational Methods for Inverse Problems
- Convergence analysis of (statistical) inverse problems under conditional stability estimates
- Convergence Rates of General Regularization Methods for Statistical Inverse Problems and Applications
- How general are general source conditions?
- Convergence rates for Tikhonov regularization from different kinds of smoothness conditions
- Theory of Reproducing Kernels
- Linear integral equations
- The elements of statistical learning. Data mining, inference, and prediction