Empirical risk minimization as parameter choice rule for general linear regularization methods
DOI: 10.1214/19-AIHP966 · zbMath: 1439.62096 · arXiv: 1703.07809 · OpenAlex: W3004886965 · MaRDI QID: Q2179243
Publication date: 12 May 2020
Published in: Annales de l'Institut Henri Poincaré. Probabilités et Statistiques
Full work available at URL: https://arxiv.org/abs/1703.07809
Keywords: exponential bounds; regularization method; oracle inequality; statistical inverse problem; a posteriori parameter choice rule; order optimality; filter-based inversion
MSC classification: Asymptotic properties of nonparametric inference (62G20) · Nonparametric estimation (62G05) · Numerical solutions of ill-posed problems in abstract spaces; regularization (65J20) · Numerical solution to inverse problems in abstract spaces (65J22)
Cites Work
- Risk hull method and regularization by projections of ill-posed inverse problems
- Comparing parameter choice methods for regularization of ill-posed problems
- A statistical perspective on ill-posed inverse problems (with discussion)
- On universal oracle inequalities related to high-dimensional linear models
- Minimax signal detection in ill-posed inverse problems
- The principle of penalized empirical risk in severely ill-posed problems
- Bandwidth choice for nonparametric regression
- Asymptotic optimality for \(C_p\), \(C_L\), cross-validation and generalized cross-validation: Discrete index set
- Optimal filtering of square-integrable signals in Gaussian noise
- Estimation of the mean of a multivariate normal distribution
- Discretization effects in statistical inverse problems
- Asymptotic optimality of generalized cross-validation for choosing the regularization parameter
- Ordered linear smoothers
- A statistical approach to some inverse problems for partial differential equations
- Bounds on the prediction error of penalized least squares estimators with convex penalty
- Risk estimators for choosing regularization parameters in ill-posed problems -- properties and limitations
- Improved estimates of statistical regularization parameters in Fourier differentiation and smoothing
- Oracle inequalities for inverse problems
- Nonlinear solution of linear inverse problems by wavelet-vaguelette decomposition
- Adaptive spectral regularizations of high dimensional linear models
- On pointwise adaptive nonparametric deconvolution
- On convergence rates for iteratively regularized Newton-type methods under a Lipschitz-type nonlinearity condition
- Spectral cut-off regularizations for ill-posed linear models
- Signal detection for inverse problems in a multidimensional framework
- Minimal penalties for Gaussian model selection
- Minimax rates for statistical inverse problems under general source conditions
- Optimal Discretization of Inverse Problems in Hilbert Scales. Regularization and Self-Regularization of Projection Methods
- SURE Guided Gaussian Mixture Image Denoising
- Asymptotically optimal difference-based estimation of variance in nonparametric regression
- Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data
- A Lepskij-type stopping rule for regularized Newton methods
- Regularization of some linear ill-posed problems with discretized random noisy data
- Optimal Choice of a Truncation Level for the Truncated SVD Solution of Linear First Kind Integral Equations When Data are Noisy
- Wavelet decomposition approaches to statistical inverse problems
- Generalized Cross-Validation as a Method for Choosing a Good Ridge Parameter
- Practical Approximate Solutions to Linear Operator Equations When the Data are Noisy
- Estimating the Variance In Nonparametric Regression—What is a Reasonable Choice?
- Geometry of linear ill-posed problems in variable Hilbert scales
- On the best rate of adaptive estimation in some inverse problems
- Adaptivity and Oracle Inequalities in Linear Statistical Inverse Problems: A (Numerical) Survey
- Unbiased Risk Estimates for Singular Value Thresholding and Spectral Estimators
- Adaptive Wavelet Galerkin Methods for Linear Inverse Problems
- Runge–Kutta integrators yield optimal regularization schemes
- Optimal Adaptation for Early Stopping in Statistical Inverse Problems
- On a Problem of Adaptive Estimation in Gaussian White Noise
- Statistical Inverse Estimation in Hilbert Scales
- Computational Methods for Inverse Problems
- Wavelet Deconvolution in a Periodic Setting
- Block Thresholding and Sharp Adaptive Estimation in Severely Ill-Posed Inverse Problems
- On the discrepancy principle and generalised maximum likelihood for regularisation
- Stein Unbiased GrAdient estimator of the Risk (SUGAR) for Multiple Parameter Selection
- Convergence Rates of General Regularization Methods for Statistical Inverse Problems and Applications
- Characterizations of Variational Source Conditions, Converse Results, and Maxisets of Spectral Regularization Methods
- How general are general source conditions?
- The Lepskii principle revisited
- Some Comments on \(C_P\)
- Gaussian model selection