Learning Bounds for Kernel Regression Using Effective Data Dimensionality
Publication: 5706660
DOI: 10.1162/0899766054323008
zbMath: 1080.68044
OpenAlex: W2044514896
Wikidata: Q30993366
Scholia: Q30993366
MaRDI QID: Q5706660
Publication date: 21 November 2005
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/0899766054323008
Related Items (33)
- A review of distributed statistical inference
- Non-asymptotic error bound for optimal prediction of function-on-function regression by RKHS approach
- A partially linear framework for massive heterogeneous data
- Nyström landmark sampling and regularized Christoffel functions
- Unnamed Item
- Faster Kernel Ridge Regression Using Sketching and Preconditioning
- Least square regression with indefinite kernels and coefficient regularization
- Spectral algorithms for learning with dependent observations
- HARFE: hard-ridge random feature expansion
- Capacity dependent analysis for functional online learning algorithms
- Statistical inference using regularized M-estimation in the reproducing kernel Hilbert space for handling missing data
- Random design analysis of ridge regression
- Distributed Bayesian inference in massive spatial data
- High-Dimensional Analysis of Double Descent for Linear Regression with Random Projections
- An Asymptotic Analysis of Random Partition Based Minibatch Momentum Methods for Linear Regression Models
- Nonparametric distributed learning under general designs
- Discrepancy based model selection in statistical inverse problems
- Kernel regression, minimax rates and effective dimensionality: Beyond the regular case
- Kernel conjugate gradient methods with random projections
- Importance sampling: intrinsic dimension and computational cost
- General regularization schemes for signal detection in inverse problems
- Estimator selection in the Gaussian setting
- Analysis of regularized least squares for functional linear regression model
- High-dimensional regression with unknown variance
- Concentration Inequalities for Statistical Inference
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Adaptive discretization for signal detection in statistical inverse problems
- Analysis of regularized Nyström subsampling for regression functions of low smoothness
- Optimal Rates for Multi-pass Stochastic Gradient Methods
- Unnamed Item
- Dimension independent excess risk by stochastic gradient descent
- Unnamed Item
- Distributed least squares prediction for functional linear regression*
Cites Work
- Optimal global rates of convergence for nonparametric regression
- Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
- Covering numbers for support vector machines
- Improving the sample complexity using global data
- The importance of convexity in learning with squared loss
- 10.1162/153244302760200713
- Leave-One-Out Bounds for Kernel Methods
- 10.1162/1532443041424337