Reproducing property of bounded linear operators and kernel regularized least square regressions
DOI: 10.1142/S0219691324500139
MaRDI QID: Q6591721
Publication date: 22 August 2024
Published in: International Journal of Wavelets, Multiresolution and Information Processing
Keywords: convergence rate; bounded linear operator; functional reproducing kernel Hilbert space; kernel regularized regression
MSC classification: Computational learning theory (68Q32); Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22); Rate of convergence, degree of approximation (41A25)
Cites Work
- Learning rates of regularized regression on the unit sphere
- Approximation by multivariate Bernstein-Durrmeyer operators and learning rates of least-squares regularized regression with multivariate polynomial kernels
- Bernstein-type operators and their derivatives
- Learning theory viewpoint of approximation by positive linear operators
- Spherical harmonics and approximations on the unit sphere. An introduction
- On the de la Vallée-Poussin means on the sphere
- Eigenvalues of integral operators defined by smooth positive definite kernels
- Cesàro summability and Marchaud inequality
- Derivative reproducing properties for kernel methods in learning theory
- Learning rates for regularized classifiers using multivariate polynomial kernels
- A note on application of integral operator in learning theory
- Multipliers for (C,a)-bounded Fourier expansions in Banach spaces and approximation theory
- Fractional derivatives and best approximations
- Best approximation and \(K\)-functionals
- A note on Bernstein-Durrmeyer operators in \(L_2(S)\)
- Some equivalence theorems with \(K\)-functionals
- Learning rates of multi-kernel regression by orthogonal greedy algorithm
- Regularization in a functional reproducing kernel Hilbert space
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
- Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems
- Functional reproducing kernel Hilbert spaces for non-point-evaluation functional data
- Localized polynomial frames on the ball
- Sharp Jackson inequalities
- Approximation on Banach spaces of functions on the sphere
- Approximation with polynomial kernels and SVM classifiers
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- On reproducing kernel Banach spaces: generic definitions and unified framework of constructions
- On the mathematical foundations of learning
- Application of integral operator for vector-valued regression learning
- Convergence analysis of coefficient-based regularization under moment incremental condition
- Weighted approximation via Θ-summations of Fourier-Jacobi series
- Summability of Fourier orthogonal series for Jacobi weight on a ball in \(\mathbb{R}^d\)
- A graph-based active learning method for classification of remote sensing images
- Approximation Theory and Harmonic Analysis on Spheres and Balls
- Efficiency of the weak Rescaled Pure Greedy Algorithm
- Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
- Comparison theorems on large-margin learning
- Unified error estimate for weak biorthogonal Greedy algorithms
- On the K-functional in learning theory
- Coefficient-based regularization network with variance loss for error
- Regularized least square regression with spherical polynomial kernels
- An efficient multiple kernel learning in reproducing kernel Hilbert spaces (RKHS)
- A representer theorem for deep kernel learning
- A Convolution Structure for Jacobi Series
- Optimality of the rescaled pure greedy learning algorithms
- Double graphs regularized multi-view subspace clustering
- Convergence analysis for kernel-regularized online regression associated with an RRKHS
- Learning performance of uncentered kernel-based principal component analysis
- Analysis of regularized least squares ranking with centered reproducing kernel
- On the density of translation networks defined on the unit ball