Sparse model selection under heterogeneous noise: exact penalisation and data-driven thresholding
From MaRDI portal
Publication: 2447094
DOI: 10.1214/14-EJS889
zbMath: 1294.62058
MaRDI QID: Q2447094
Laurent Cavalier, Markus Reiss
Publication date: 24 April 2014
Published in: Electronic Journal of Statistics
Full work available at URL: https://projecteuclid.org/euclid.ejs/1397826707
Keywords: statistical inverse problem; optimal threshold; penalized empirical risk; risk hull; sparse oracle inequality; full subset selection; heteroskedastic noise
Related Items (3)
- Solution of linear ill-posed problems using overcomplete dictionaries
- Solution of linear ill-posed problems using random dictionaries
- Non-asymptotic bounds for percentiles of independent non-identical random variables
Cites Work
- Risk hull method and regularization by projections of ill-posed inverse problems
- On oracle inequalities related to data-driven hard thresholding
- Reconstruction of sparse vectors in white Gaussian noise
- Function estimation via wavelet shrinkage for long-memory data
- Oracle inequalities for inverse problems
- Nonlinear solution of linear inverse problems by wavelet-vaguelette decomposition
- Long memory continuous time models
- Model selection and sharp asymptotic minimaxity
- Nonlinear estimation for linear inverse problems with error in the operator
- Adapting to unknown sparsity by controlling the false discovery rate
- The Fractional Unit Root Distribution
- Wavelet decomposition approaches to statistical inverse problems
- Wavelet Deconvolution With Noisy Eigenvalues
- Adaptive Wavelet Galerkin Methods for Linear Inverse Problems
- Estimation in a problem of fractional integration
- Adaptive hard-thresholding for linear inverse problems
- Gaussian model selection