Learning Bounds for Kernel Regression Using Effective Data Dimensionality

DOI: 10.1162/0899766054323008
zbMath: 1080.68044
OpenAlex: W2044514896
Wikidata: Q30993366 (Scholia: Q30993366)
MaRDI QID: Q5706660

Author: Tong Zhang

Publication date: 21 November 2005

Published in: Neural Computation

Full work available at URL: https://doi.org/10.1162/0899766054323008
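The paper's titular quantity is the effective dimensionality of the data in kernel (ridge) regression. As an illustration only: the spectral form d(λ) = Σᵢ μᵢ / (μᵢ + nλ), computed from the eigenvalues μᵢ of the kernel matrix, is the form of effective dimension commonly used in this literature; the kernel choice, data, and λ values below are hypothetical and not taken from the paper. A minimal sketch:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))  # clip tiny negative round-off

def effective_dimension(K, lam):
    """Effective dimension d(lambda) = trace(K (K + n*lambda*I)^{-1}),
    evaluated via the eigenvalues of the (symmetric) kernel matrix K."""
    n = K.shape[0]
    mu = np.linalg.eigvalsh(K)               # eigenvalues mu_i of K
    return float(np.sum(mu / (mu + n * lam)))

# Illustration: the effective dimension shrinks as regularization grows.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
K = rbf_kernel(X, X, gamma=0.5)
for lam in (1e-4, 1e-2, 1.0):
    print(f"lambda = {lam:g}:  d(lambda) = {effective_dimension(K, lam):.2f}")
```

In generalization bounds of this type, d(λ) replaces the raw input dimension, so the bound adapts to how fast the kernel spectrum decays rather than to the ambient dimensionality.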




Related Items (30)

A review of distributed statistical inference
Non-asymptotic error bound for optimal prediction of function-on-function regression by RKHS approach
A partially linear framework for massive heterogeneous data
Nyström landmark sampling and regularized Christoffel functions
Faster Kernel Ridge Regression Using Sketching and Preconditioning
Least square regression with indefinite kernels and coefficient regularization
Spectral algorithms for learning with dependent observations
HARFE: hard-ridge random feature expansion
Capacity dependent analysis for functional online learning algorithms
Statistical inference using regularized M-estimation in the reproducing kernel Hilbert space for handling missing data
Random design analysis of ridge regression
Distributed Bayesian inference in massive spatial data
High-Dimensional Analysis of Double Descent for Linear Regression with Random Projections
An Asymptotic Analysis of Random Partition Based Minibatch Momentum Methods for Linear Regression Models
Nonparametric distributed learning under general designs
Discrepancy based model selection in statistical inverse problems
Kernel regression, minimax rates and effective dimensionality: Beyond the regular case
Kernel conjugate gradient methods with random projections
Importance sampling: intrinsic dimension and computational cost
General regularization schemes for signal detection in inverse problems
Estimator selection in the Gaussian setting
Analysis of regularized least squares for functional linear regression model
High-dimensional regression with unknown variance
Concentration Inequalities for Statistical Inference
Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
Adaptive discretization for signal detection in statistical inverse problems
Analysis of regularized Nyström subsampling for regression functions of low smoothness
Optimal Rates for Multi-pass Stochastic Gradient Methods
Dimension independent excess risk by stochastic gradient descent
Distributed least squares prediction for functional linear regression





This page was built for publication: Learning Bounds for Kernel Regression Using Effective Data Dimensionality