Learning rate of distribution regression with dependent samples
From MaRDI portal
Publication:2171946
DOI: 10.1016/j.jco.2022.101679 · OpenAlex: W4281551727 · MaRDI QID: Q2171946
Publication date: 12 September 2022
Published in: Journal of Complexity
Full work available at URL: https://doi.org/10.1016/j.jco.2022.101679
Linear inference, regression (62Jxx) · Artificial intelligence (68Txx) · Nonparametric inference (62Gxx)
Related Items (1)
Cites Work
- An empirical feature-based learning algorithm producing sparse approximations
- Least square regression with indefinite kernels and coefficient regularization
- Model selection for regularized least-squares algorithm in learning theory
- Regularized least square regression with dependent samples
- Stationarity and mixing properties of the dynamic Tobit model
- Mixing properties of ARMA processes
- Almost sure invariance principles for weakly dependent vector-valued random variables
- Solving the multiple instance problem with axis-parallel rectangles
- Dynamic mode decomposition in vector-valued reproducing kernel Hilbert spaces for extracting dynamical structure among observables
- Optimal learning rates for distribution regression
- Optimal rates for the regularized least-squares algorithm
- Distributed learning with multi-penalty regularization
- Shannon sampling. II: Connections to learning theory
- On the mathematical foundations of learning
- Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates
- Learning Theory for Distribution Regression
- Hilbert space embeddings and metrics on probability measures
- Learning Theory
- Support Vector Machines
- Online learning with Markov sampling
- Characteristic and Universal Tensor Product Kernels
- Deep distributed convolutional neural networks: Universality
- Robust kernel-based distribution regression