Estimates on learning rates for multi-penalty distribution regression
From MaRDI portal
Publication: 6138930
DOI: 10.1016/j.acha.2023.101609
arXiv: 2006.09017
MaRDI QID: Q6138930
Publication date: 16 January 2024
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://arxiv.org/abs/2006.09017
Keywords: learning theory; integral operator; multi-penalty regularization; distributed learning; learning rate; distribution regression
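As the keywords indicate, the paper concerns multi-penalty regularization, i.e. least-squares learning with several penalty terms weighted by separate regularization parameters. A minimal sketch of a two-penalty Tikhonov problem (all names and the choice of penalties here are illustrative, not taken from the paper):

```python
import numpy as np

def multi_penalty_ls(X, y, lam1, lam2, L):
    """Solve min_w ||Xw - y||^2 + lam1*||w||^2 + lam2*||Lw||^2.

    Two-penalty Tikhonov regularization: lam1 weights a ridge penalty,
    lam2 weights a penalty through an operator L (e.g. a difference
    matrix enforcing smoothness). The minimizer satisfies the normal
    equations (X'X + lam1*I + lam2*L'L) w = X'y.
    """
    n_features = X.shape[1]
    A = X.T @ X + lam1 * np.eye(n_features) + lam2 * (L.T @ L)
    return np.linalg.solve(A, X.T @ y)

# Illustrative usage on synthetic noiseless data.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
w_true = np.arange(1.0, 6.0)
y = X @ w_true
L = np.diff(np.eye(5), axis=0)  # first-difference penalty operator

w_tiny = multi_penalty_ls(X, y, 1e-10, 1e-10, L)   # near-unregularized fit
w_reg = multi_penalty_ls(X, y, 10.0, 10.0, L)      # shrunken, smoothed fit
```

With vanishing penalties the solution recovers the ordinary least-squares fit; increasing either parameter shrinks the estimate, and balancing the two parameters is the core difficulty that learning-rate analyses of multi-penalty schemes address.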
Cites Work
- Multi-penalty regularization in learning theory
- Sparsity in multiple kernel learning
- Optimal learning rates for distribution regression
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Optimal rates for the regularized least-squares algorithm
- Distributed learning with multi-penalty regularization
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- Learning Theory for Distribution Regression
- Multi-parameter Tikhonov regularization with the ℓ 0 sparsity constraint
- Multi-penalty regularization with a component-wise penalization
- Shannon sampling and function reconstruction from point values
- Distributed learning with indefinite kernels
- Unifying Divergence Minimization and Statistical Inference Via Convex Duality
- Learning theory of distributed spectral algorithms
This page was built for publication: Estimates on learning rates for multi-penalty distribution regression