Convergence analysis of Tikhonov regularization for non-linear statistical inverse problems
DOI: 10.1214/20-EJS1735
zbMath: 1473.65065
arXiv: 1902.05404
OpenAlex: W3048309030
MaRDI QID: Q2192321
Peter Mathé, Gilles Blanchard, Abhishake Rastogi
Publication date: 17 August 2020
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/1902.05404
Keywords: Tikhonov regularization; reproducing kernel Hilbert space; minimax convergence rates; statistical inverse problem; general source condition
MSC classifications: Nonparametric regression and quantile regression (62G08); Asymptotic properties of nonparametric inference (62G20); Numerical solutions to equations with nonlinear operators (65J15); Numerical solutions of ill-posed problems in abstract spaces; regularization (65J20); Numerical solution to inverse problems in abstract spaces (65J22); Nonlinear algebraic or transcendental equations (65Hxx)
Related Items (4)
Cites Work
- Oracle-type posterior contraction rates in Bayesian inverse problems
- Regularization methods in Banach spaces
- Optimal rates for regularization of statistical inverse learning problems
- Theory of reproducing kernels and applications
- On regularization algorithms in learning theory
- Methodology and convergence rates for functional linear regression
- Nonlinear Tikhonov regularization in Hilbert scales for inverse boundary value problems with random noise
- Optimal learning rates for kernel partial least squares
- Functional linear model
- A convergence analysis of the Landweber iteration for nonlinear ill-posed problems
- Learning sets with separating kernels
- Balancing principle in supervised learning for a general regularization scheme
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Optimal rates for the regularized least-squares algorithm
- Approximation methods for supervised learning
- Shannon sampling. II: Connections to learning theory
- Penalized estimators for non linear inverse problems
- Convergence Characteristics of Methods of Regularization Estimators for Nonlinear Operator Equations
- Support Vector Machines
- Iteratively Regularized Gauss–Newton Method for Nonlinear Inverse Problems with Random Noise
- Remarks on Inequalities for Large Deviation Probabilities
- Optimal a Posteriori Parameter Choice for Tikhonov Regularization for Solving Nonlinear Ill-Posed Problems
- Geometry of linear ill-posed problems in variable Hilbert scales
- On the nature of ill-posedness of an inverse problem arising in option pricing
- Convergence analysis of (statistical) inverse problems under conditional stability estimates
- Bayesian inverse problems with non-commuting operators
- Understanding Machine Learning
- On Learning Vector-Valued Functions
- Consistency and rates of convergence of nonlinear Tikhonov regularization with random noise
- Minimax theory for a class of nonlinear statistical inverse problems
- Inverse problems for partial differential equations