Early stopping for statistical inverse problems via truncated SVD estimation
From MaRDI portal
Publication: 1616307
DOI: 10.1214/18-EJS1482 · zbMath: 1403.65025 · arXiv: 1710.07278 · OpenAlex: W2963269479 · MaRDI QID: Q1616307
Marc Hoffmann, Gilles Blanchard, Markus Reiss
Publication date: 1 November 2018
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/1710.07278
discrepancy principle, adaptive estimation, oracle inequalities, linear inverse problems, truncated SVD, early stopping, spectral cut-off
Nonparametric regression and quantile regression (62G08); Nonparametric estimation (62G05); Numerical solutions to equations with linear operators (65J10); Numerical solutions of ill-posed problems in abstract spaces; regularization (65J20)
Related Items
- A probabilistic oracle inequality and quantification of uncertainty of a modified discrepancy principle for statistical inverse problems
- Optimal Convergence of the Discrepancy Principle for Polynomially and Exponentially Ill-Posed Operators under White Noise
- Mini-workshop: Mathematical foundations of robust and generalizable learning. Abstracts from the mini-workshop held October 2--8, 2022
- Smoothed residual stopping for statistical inverse problems via truncated SVD estimation
- Optimal Adaptation for Early Stopping in Statistical Inverse Problems
- Towards adaptivity via a new discrepancy principle for Poisson inverse problems
- A modified discrepancy principle to attain optimal convergence rates under unknown noise
- From inexact optimization to learning via gradient concentration
Cites Work
- Ordered smoothers with exponential weighting
- Boosting algorithms: regularization, prediction and model fitting
- Risk hull method and regularization by projections of ill-posed inverse problems
- Risk estimators for choosing regularization parameters in ill-posed problems -- properties and limitations
- Adaptive estimation of a quadratic functional by model selection.
- Oracle inequalities for inverse problems
- Nonparametric goodness-of-fit testing under Gaussian models
- On early stopping in gradient descent learning
- Numerical Methods for Large Eigenvalue Problems
- Discrepancy principle for statistical inverse problems with application to conjugate gradient iteration
- Practical Approximate Solutions to Linear Operator Equations When the Data are Noisy
- Adaptive Wavelet Galerkin Methods for Linear Inverse Problems
- Optimal Adaptation for Early Stopping in Statistical Inverse Problems
- Convergence Rates of General Regularization Methods for Statistical Inverse Problems and Applications