The Goldenshluger-Lepski method for constrained least-squares estimators over RKHSs
From MaRDI portal
Publication:1983602
DOI: 10.3150/20-BEJ1307
zbMath: 1473.62109
arXiv: 1811.01061
OpenAlex: W3193613679
MaRDI QID: Q1983602
Steffen Grünewälder, Stephen Page
Publication date: 10 September 2021
Published in: Bernoulli
Full work available at URL: https://arxiv.org/abs/1811.01061
Mathematics Subject Classification:
- Density estimation (62G07)
- Nonparametric estimation (62G05)
- Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
Cites Work
- Tikhonov, Ivanov and Morozov regularization for support vector machine learning
- Bandwidth selection in kernel density estimation: oracle inequalities and adaptive minimax optimality
- Structural adaptation via \(\mathbb L_p\)-norm oracle inequalities
- Universal pointwise selection rule in multivariate function estimation
- Risk bounds for model selection via penalization
- On adaptive inverse estimation of linear functional in Hilbert scales
- Weak convergence and empirical processes. With applications to statistics
- Localized algorithms for multiple kernel learning
- Optimal regression rates for SVMs using Gaussian kernels
- Adaptive kernel methods using the balancing principle
- The Goldenshluger-Lepski method for constrained least-squares estimators over RKHSs
- Balancing principle in supervised learning for a general regularization scheme
- Minimal penalties for Gaussian model selection
- Optimal Discretization of Inverse Problems in Hilbert Scales. Regularization and Self-Regularization of Projection Methods
- General Selection Rule from a Family of Linear Estimators
- Asymptotically Minimax Adaptive Estimation. I: Upper Bounds. Optimally Adaptive Estimates
- Model selection for regression on a random design
- Mathematical Foundations of Infinite-Dimensional Statistical Models
- Support Vector Machines
- CROSS-VALIDATION BASED ADAPTATION FOR REGULARIZATION OPERATORS IN LEARNING THEORY
- A new concentration result for regularized risk minimizers
- Probability with Martingales
- Asymptotically Minimax Adaptive Estimation. II. Schemes without Optimal Adaptation: Adaptive Estimators
- ESTIMATING THE APPROXIMATION ERROR IN LEARNING THEORY
- On a Problem of Adaptive Estimation in Gaussian White Noise
- Statistical Inverse Estimation in Hilbert Scales
- Nonlinear Tikhonov regularization in Hilbert scales with balancing principle tuning parameter in statistical inverse problems
- Ivanov-Regularised Least-Squares Estimators over Large RKHSs and Their Interpolation Spaces
- Analysis of regularized Nyström subsampling for regression functions of low smoothness
- An alternative point of view on Lepski's method
- SCALES OF BANACH SPACES
- Gaussian model selection
- Choosing multiple parameters for support vector machines