Shannon sampling. II: Connections to learning theory

From MaRDI portal
Publication:2581447

DOI: 10.1016/j.acha.2005.03.001
zbMath: 1107.94008
OpenAlex: W2027057364
MaRDI QID: Q2581447

Ding-Xuan Zhou, Stephen Smale

Publication date: 10 January 2006

Published in: Applied and Computational Harmonic Analysis

Full work available at URL: https://doi.org/10.1016/j.acha.2005.03.001




Related Items (98)

Learning performance of regularized moving least square regression
Online regression with unbounded sampling
Efficient kernel-based variable selection with sparsistency
LOCAL LEARNING ESTIMATES BY INTEGRAL OPERATORS
Improved sampling and reconstruction in spline subspaces
LEAST SQUARE REGRESSION WITH COEFFICIENT REGULARIZATION BY GRADIENT DESCENT
Whittaker-Kotel'nikov-Shannon approximation of \(\phi\)-sub-Gaussian random processes
Multi-penalty regularization in learning theory
Geometry on probability spaces
Efficiency of classification methods based on empirical risk minimization
Fully online classification by regularization
Hermite learning with gradient data
Regularized least square regression with dependent samples
Sampling theory, oblique projections and a question by Smale and Zhou
THE COEFFICIENT REGULARIZED REGRESSION WITH RANDOM PROJECTION
Divergence-free quasi-interpolation
Application of integral operator for regularized least-square regression
The convergence rates of Shannon sampling learning algorithms
Von Neumann indices and classes of positive definite functions
Distributed parametric and nonparametric regression with on-line performance bounds computation
On regularization algorithms in learning theory
Learning rate of distribution regression with dependent samples
Regularized least square regression with unbounded and dependent sampling
Integral operator approach to learning theory with unbounded sampling
Error estimates from noise samples for iterative algorithm in shift-invariant signal spaces
Distributed learning with multi-penalty regularization
Learning theory of distributed spectral algorithms
Kernel Methods for the Approximation of Nonlinear Systems
Least square regression with indefinite kernels and coefficient regularization
Infinite-dimensional stochastic transforms and reproducing kernel Hilbert space
Spherical random sampling of localized functions on 𝕊ⁿ⁻¹
Generalization errors of Laplacian regularized least squares regression
Learning gradients via an early stopping gradient descent method
Random sampling of signals concentrated on compact set in localized reproducing kernel subspace of \(L^p (\mathbb{R}^n)\)
Higher Order Difference Operators and Associated Relative Reproducing Kernel Hilbert Spaces
Convergence analysis of Tikhonov regularization for non-linear statistical inverse problems
Approximation of Lyapunov functions from noisy data
Convex regularization in statistical inverse learning problems
Unnamed Item
Optimality of regularized least squares ranking with imperfect kernels
Estimates on learning rates for multi-penalty distribution regression
On complex-valued 2D eikonals. IV: continuation past a caustic
Harmonic analysis of network systems via kernels and their boundary realizations
Spectral Algorithms for Supervised Learning
Sparse Gaussian processes for solving nonlinear PDEs
Sketching with Spherical Designs for Noisy Data Fitting on Spheres
Approximation properties of mixed sampling-Kantorovich operators
Error bounds for \(l^p\)-norm multiple kernel learning with least square loss
Bias corrected regularization kernel method in ranking
Sampling in Paley-Wiener spaces on combinatorial graphs
Unified approach to coefficient-based regularized regression
Learning with varying insensitive loss
Distributed learning and distribution regression of coefficient regularization
Convergence analysis of online algorithms
Behavior of a functional in learning theory
Optimal rates for regularization of statistical inverse learning problems
Reproducing kernels and choices of associated feature spaces, in the form of \(L^2\)-spaces
CROSS-VALIDATION BASED ADAPTATION FOR REGULARIZATION OPERATORS IN LEARNING THEORY
Robust kernel-based distribution regression
Learning with tensors: a framework based on convex optimization and spectral regularization
Simultaneous estimates for vector-valued Gabor frames of Hermite functions
REGULARIZED LEAST SQUARE ALGORITHM WITH TWO KERNELS
Convergence rates of learning algorithms by random projection
The convergence rate of a regularized ranking algorithm
Consistency of regularized spectral clustering
Average sampling and reconstruction in a reproducing kernel subspace of homogeneous type space
Approximation analysis of gradient descent algorithm for bipartite ranking
Frames, Riesz bases, and sampling expansions in Banach spaces via semi-inner products
On the existence of optimal unions of subspaces for data modeling and clustering
Coefficient-based regression with non-identical unbounded sampling
Uniform bounds of aliasing and truncated errors in sampling series of functions from anisotropic Besov class
Least-square regularized regression with non-iid sampling
Approximation with polynomial kernels and SVM classifiers
LEARNING RATES OF REGULARIZED REGRESSION FOR FUNCTIONAL DATA
Application of integral operator for vector-valued regression learning
Least Square Regression with lp-Coefficient Regularization
Random sampling and reconstruction of concentrated signals in a reproducing kernel space
Variational splines and Paley-Wiener spaces on Combinatorial graphs
System identification using kernel-based regularization: new insights on stability and consistency issues
Error analysis of multicategory support vector machine classifiers
Learning rates for the risk of kernel-based quantile regression estimators in additive models
ONLINE LEARNING WITH MARKOV SAMPLING
A note on application of integral operator in learning theory
Gradient-Based Kernel Dimension Reduction for Regression
Analysis of support vector machines regression
General A-P iterative algorithm in shift-invariant spaces
Distributed learning with indefinite kernels
VECTOR VALUED REPRODUCING KERNEL HILBERT SPACES AND UNIVERSALITY
Convolution sampling and reconstruction of signals in a reproducing kernel subspace
Reconstructing signals with finite rate of innovation from noisy samples
Debiased magnitude-preserving ranking: learning rate and bias characterization
An elementary analysis of ridge regression with random design
Unnamed Item
High order Parzen windows and randomized sampling
Optimal rates for coefficient-based regularized regression
Gradient learning in a classification setting by gradient descent
Generalization performance of Gaussian kernels SVMC based on Markov sampling
Sampling and reconstruction for shift-invariant stochastic processes



Cites Work


This page was built for publication: Shannon sampling. II: Connections to learning theory