Mercer theorem for RKHS on noncompact sets
Publication: 558011
DOI: 10.1016/j.jco.2004.09.002
zbMath: 1094.46021
OpenAlex: W2083496527
MaRDI QID: Q558011
Publication date: 30 June 2005
Published in: Journal of Complexity
Full work available at URL: https://doi.org/10.1016/j.jco.2004.09.002
Mathematics Subject Classification:
- Computational learning theory (68Q32)
- Learning and adaptive systems in artificial intelligence (68T05)
- Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
- Set functions and measures on topological spaces (regularity of measures, etc.) (28C15)
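For context, the title refers to the classical Mercer theorem, which this paper extends beyond compact domains. A standard statement of the compact case (sketched here only for orientation; the precise noncompact hypotheses are those of the paper itself) is the following: for a continuous, symmetric, positive semidefinite kernel \(K\) on a compact metric space \(X\) equipped with a finite Borel measure \(\mu\) of full support, the eigenpairs \((\lambda_k, \phi_k)\) of the integral operator
\[
  (L_K f)(x) = \int_X K(x, y)\, f(y)\, d\mu(y), \qquad f \in L^2(X, \mu),
\]
yield the expansion
\[
  K(x, y) = \sum_{k \ge 1} \lambda_k\, \phi_k(x)\, \phi_k(y), \qquad x, y \in X,
\]
with the series converging absolutely and uniformly on \(X \times X\).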
Related Items (46)
Machine learning with kernels for portfolio valuation and risk management ⋮ The stochastic string model as a unifying theory of the term structure of interest rates ⋮ Reproducing properties of differentiable Mercer-like kernels ⋮ Asymptotic behaviour of the empirical distance covariance for dependent data ⋮ On the nuclearity of integral operators ⋮ Long signal change-point detection ⋮ Application of integral operator for regularized least-square regression ⋮ A consistent goodness-of-fit test for huge dimensional and functional data ⋮ The convergence rates of Shannon sampling learning algorithms ⋮ On the almost sure convergence for sums of negatively superadditive dependent random vectors in Hilbert spaces and its application ⋮ Unnamed Item ⋮ Reproducing kernel Hilbert spaces associated with kernels on topological spaces ⋮ Eigenvalue decay rates for positive integral operators ⋮ An extension of Mercer's theory to \(L^p\) ⋮ Dependent wild bootstrap for degenerate \(U\)- and \(V\)-statistics ⋮ Some remarks on MCMC estimation of spectra of integral operators ⋮ Entropic regularization of Wasserstein distance between infinite-dimensional Gaussian measures and Gaussian processes ⋮ On the mathematical foundations of stable RKHSs ⋮ Degenerate \(U\)- and \(V\)-statistics under ergodicity: asymptotics, bootstrap and applications in statistics ⋮ On the convergence for weighted sums of Hilbert-valued coordinatewise pairwise NQD random variables and its application ⋮ Convergence of online pairwise regression learning with quadratic loss ⋮ Reproducing kernels of Sobolev–Slobodeckij spaces via Green's kernel approach: Theory and applications ⋮ Identifiability of interaction kernels in mean-field equations of interacting particles ⋮ Unsupervised learning of observation functions in state space models by nonparametric moment methods ⋮ Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs ⋮ Applications of the Bernstein-Durrmeyer operators in estimating the norm of Mercer kernel matrices ⋮ Bootstrap for dependent Hilbert space-valued random variables with application to von Mises statistics ⋮ Integral operators generated by multi-scale kernels ⋮ Gaussian kernel quadrature at scaled Gauss-Hermite nodes ⋮ The palm measure and the Voronoi tessellation for the Ginibre process ⋮ An efficient Kriging based method for time-dependent reliability based robust design optimization via evolutionary algorithm ⋮ Kernel-based interpolation at approximate Fekete points ⋮ A new kernel-based approach for linear system identification ⋮ Some properties of Gaussian reproducing kernel Hilbert spaces and their implications for function approximation and learning theory ⋮ System identification using kernel-based regularization: new insights on stability and consistency issues ⋮ On three families of Karhunen-Loève expansions associated with classical orthogonal polynomials ⋮ Error bounds of the invariant statistics in machine learning of ergodic Itô diffusions ⋮ Generalization Error of Minimum Weighted Norm and Kernel Interpolation ⋮ Testing marginal homogeneity in Hilbert spaces with applications to stock market returns ⋮ Operators with continuous kernels ⋮ Boosting as a kernel-based method ⋮ VECTOR VALUED REPRODUCING KERNEL HILBERT SPACES OF INTEGRABLE FUNCTIONS AND MERCER THEOREM ⋮ A regression perspective on generalized distance covariance and the Hilbert-Schmidt independence criterion ⋮ Finite Sample Approximations of Exact and Entropic Wasserstein Distances Between Covariance Operators and Gaussian Processes ⋮ A sieve stochastic gradient descent estimator for online nonparametric regression in Sobolev ellipsoids ⋮ Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
Cites Work
- Unnamed Item
- Unnamed Item
- Best choices for regularization parameters in learning theory: on the bias-variance problem.
- The covering number in learning theory
- Regularization networks and support vector machines
- On the mathematical foundations of learning
- Learning Theory
- Capacity of reproducing kernel spaces in learning theory
- ESTIMATING THE APPROXIMATION ERROR IN LEARNING THEORY
- On the Eigenvalues of a Class of Integral Equations Arising in Laser Theory
- Theory of Reproducing Kernels