Optimal learning rates for distribution regression
DOI: 10.1016/j.jco.2019.101426 · zbMath: 1435.62259 · OpenAlex: W2969928494 · MaRDI QID: Q2283125
Zhiying Fang, Ding-Xuan Zhou, Zheng-Chu Guo
Publication date: 30 December 2019
Published in: Journal of Complexity
Full work available at URL: https://doi.org/10.1016/j.jco.2019.101426
Keywords: integral operator; reproducing kernel Hilbert space; optimal learning rate; distribution regression; mean embedding
MSC classification: General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05); Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
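The keywords describe the paper's setting: distribution regression, where each input is a bag of samples from an unknown distribution, learned through its kernel mean embedding in a reproducing kernel Hilbert space via a regularized least-squares (kernel ridge) scheme. A minimal illustrative sketch of that two-stage setup, not the paper's actual algorithm or analysis; all function names, the Gaussian kernel choice, and parameter values (`gamma`, `lam`) are assumptions for illustration:

```python
# Hypothetical sketch of two-stage distribution regression:
# stage 1 embeds each sample bag via its empirical kernel mean embedding,
# stage 2 runs regularized least squares (kernel ridge) on the embeddings.
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    """Gaussian kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mean_embedding_gram(bags, gamma=1.0):
    """Gram matrix of empirical mean embeddings:
    K[i, j] = (1 / (n_i * n_j)) * sum_{a, b} k(x_i^a, x_j^b)."""
    m = len(bags)
    K = np.empty((m, m))
    for i in range(m):
        for j in range(i, m):
            K[i, j] = K[j, i] = gaussian_kernel(bags[i], bags[j], gamma).mean()
    return K

def fit_predict(train_bags, y, test_bags, lam=1e-3, gamma=1.0):
    """Kernel ridge regression on mean embeddings (linear second-stage kernel)."""
    K = mean_embedding_gram(train_bags, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    K_test = np.array([[gaussian_kernel(tb, b, gamma).mean()
                        for b in train_bags] for tb in test_bags])
    return K_test @ alpha

# Toy example: the response is the mean of each input distribution.
rng = np.random.default_rng(0)
train = [rng.normal(mu, 0.1, size=(50, 1)) for mu in np.linspace(-1, 1, 20)]
y = np.array([bag.mean() for bag in train])
test = [rng.normal(0.5, 0.1, size=(50, 1))]
print(fit_predict(train, y, test, lam=1e-4, gamma=5.0))  # close to 0.5
```

The regularization parameter `lam` plays the role of the regularization whose tuning, together with the bag sizes, governs the learning rates analyzed in the paper.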
Related Items (7)
Cites Work
- On regularization algorithms in learning theory
- Solving the multiple instance problem with axis-parallel rectangles.
- Optimal rates for the regularized least-squares algorithm
- Learning theory estimates via integral operators and their approximations
- Learning Theory for Distribution Regression
- Convergence rates of Kernel Conjugate Gradient for random design regression
- Learning Theory
- Deep distributed convolutional neural networks: Universality
- DOI: 10.1162/jmlr.2003.3.4-5.651
- Thresholded spectral algorithms for sparse approximations
- Learning theory of distributed spectral algorithms