Online gradient descent algorithms for functional data learning
DOI: 10.1016/j.jco.2021.101635
OpenAlex: W4200574550
MaRDI QID: Q2121498
Jun Fan, Xiaming Chen, Bohao Tang, Xin Guo
Publication date: 4 April 2022
Published in: Journal of Complexity
Full work available at URL: https://doi.org/10.1016/j.jco.2021.101635
MSC classification:
- Nonparametric regression and quantile regression (62G08)
- Asymptotic properties of nonparametric inference (62G20)
- Linear regression; mixed models (62J05)
- Learning and adaptive systems in artificial intelligence (68T05)
- Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
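As a rough illustration of the topic named in the title, the following is a minimal sketch of online (stochastic) gradient descent for functional linear regression, where a response y = ⟨beta, X⟩ + noise is predicted from a functional covariate X observed on a grid. All design choices here (basis, step-size schedule, noise level) are illustrative assumptions and are not taken from the paper, whose algorithm is set in an RKHS framework.

```python
import numpy as np

# Minimal sketch of online gradient descent for functional linear
# regression.  Illustrative only; the paper's actual algorithm, step
# sizes, and RKHS setting may differ.

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 50)            # discretization of [0, 1]
w = grid[1] - grid[0]                       # quadrature weight (Riemann sum)

# Orthonormal-ish basis on the grid (assumed design, for illustration).
basis = np.vstack([
    np.ones_like(grid),
    np.sqrt(2.0) * np.cos(2.0 * np.pi * grid),
    np.sqrt(2.0) * np.sin(2.0 * np.pi * grid),
])
beta_true = basis[1] + 0.5 * basis[2]       # true slope function

beta_hat = np.zeros_like(grid)              # online estimate, starts at 0
for t in range(1, 5001):
    X = rng.standard_normal(3) @ basis      # one streaming covariate X_t
    y = w * X @ beta_true + 0.1 * rng.standard_normal()
    pred = w * X @ beta_hat                 # <beta_hat, X_t> via quadrature
    eta = 0.2 / np.sqrt(t)                  # decaying step size (assumed)
    beta_hat += eta * (y - pred) * X        # gradient step on squared loss

err = w * np.sum((beta_hat - beta_true) ** 2)   # L2 error of the estimate
print(f"L2 error after 5000 steps: {err:.4f}")
```

Because each update uses only the current observation (X_t, y_t), the estimator can be maintained in a single pass over streaming functional data, which is the setting the paper's title refers to.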
Related Items (4)
Uses Software
Cites Work
- A reproducing kernel Hilbert space approach to functional linear regression
- Prediction in functional linear regression
- On regularization algorithms in learning theory
- Methodology and convergence rates for functional linear regression
- Online gradient descent learning algorithms
- Fast and strong convergence of online learning algorithms
- Optimal rates for the regularized least-squares algorithm
- Functional data analysis.
- An RKHS approach to estimate individualized treatment rules based on functional predictors
- Error analysis on regularized regression based on the maximum correntropy criterion
- Sparse regularized learning in the reproducing kernel Banach spaces with the \(\ell^1\) norm
- Kernel regression, minimax rates and effective dimensionality: Beyond the regular case
- Learning Theory
- Support Vector Machines
- A new concentration result for regularized risk minimizers
- Any Discrimination Rule Can Have an Arbitrarily Bad Probability of Error for Finite Sample Size
- Minimax and Adaptive Prediction for Functional Linear Regression
- Distributed learning with indefinite kernels
- Shorter Notes: Some Operator Monotone Functions
- $A \geq B \geq 0$ Assures $(B^r A^p B^r)^{1/q} \geq B^{(p+2r)/q}$ for $r \geq 0$, $p \geq 0$, $q \geq 1$ with $(1 + 2r)q \geq p + 2r$
- Optimal learning with Gaussians and correntropy loss