When is the optimal regularization parameter insensitive to the choice of the loss function?
DOI: 10.1080/03610929008830285 · zbMath: 0724.62044 · OpenAlex: W2024602863 · Wikidata: Q63508141 · Scholia: Q63508141 · MaRDI QID: Q3212129
Publication date: 1990
Published in: Communications in Statistics - Theory and Methods
Full work available at URL: https://doi.org/10.1080/03610929008830285
Keywords: convergence rates; Fredholm integral equation; deconvolution; mean square error; generalized cross-validation; penalty functional; optimal smoothing parameter; optimal regularization parameter; first kind integral equations with noisy data
MSC classifications: Density estimation (62G07); Numerical methods for integral equations (65R20); Nonparametric inference (62G99)
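For orientation only: the keywords above concern selecting a regularization (smoothing) parameter by generalized cross-validation. The sketch below is not taken from the paper; it is a minimal illustration of GCV for ordinary Tikhonov/ridge regularization, with an assumed design matrix X, response y, and lambda grid chosen purely for demonstration.

```python
# Illustrative sketch (not the paper's setting): pick a Tikhonov/ridge
# regularization parameter by minimizing the generalized cross-validation score
#   V(lam) = n * ||(I - A(lam)) y||^2 / [tr(I - A(lam))]^2,
# where A(lam) = X (X^T X + lam I)^(-1) X^T is the influence ("hat") matrix.
import numpy as np

def gcv_score(X, y, lam):
    """GCV score of the ridge fit with parameter lam."""
    n, p = X.shape
    # Influence matrix of the regularized least-squares fit
    A = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    residual = y - A @ y
    return n * (residual @ residual) / np.trace(np.eye(n) - A) ** 2

def choose_lambda(X, y, lambdas):
    """Return the lambda on the grid that minimizes the GCV score."""
    scores = [gcv_score(X, y, lam) for lam in lambdas]
    return lambdas[int(np.argmin(scores))]

if __name__ == "__main__":
    # Synthetic data (assumption, for demonstration only)
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 10))
    beta = np.zeros(10)
    beta[:3] = [2.0, -1.0, 0.5]
    y = X @ beta + 0.5 * rng.standard_normal(100)
    grid = np.logspace(-4, 2, 30)
    print("GCV-selected lambda:", choose_lambda(X, y, grid))
```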
Cites Work
- Unnamed Item
- A statistical perspective on ill-posed inverse problems (with discussion)
- Error bounds for derivative estimates based on spline smoothing of exact or noisy data
- Spline smoothing and optimal rates of convergence in nonparametric regression models
- A comparison of GCV and GML for choosing the smoothing parameter in the generalized spline smoothing problem
- Asymptotic optimality of \(C_ L\) and generalized cross-validation in ridge regression with application to spline smoothing
- Convergence rates for regularized solutions of integral equations from discrete noisy data
- Smoothing noisy data with spline functions: Estimating the correct degree of smoothing by the method of generalized cross-validation
- Improved estimates of statistical regularization parameters in Fourier differentiation and smoothing
- Cross-Validated Spline Methods for the Estimation of Three-Dimensional Tumor Size Distributions from Observations on Two-Dimensional Cross Sections
- Convergence Rates for Regularized Solutions