Randomized multi-scale kernels learning with sparsity constraint regularization for regression
From MaRDI portal
Publication:5204655
DOI: 10.1142/S0219691319500486 · zbMath: 1440.68206 · MaRDI QID: Q5204655
Hao Weng, Yinhe Gu, Jian Shi, Xue-mei Dong
Publication date: 5 December 2019
Published in: International Journal of Wavelets, Multiresolution and Information Processing
General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05); Randomized algorithms (68W20)
Related Items (2)
Optimality of the rescaled pure greedy learning algorithms ⋮ Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
Cites Work
- Support-vector networks
- Learning rates of multi-kernel regression by orthogonal greedy algorithm
- Approximation with random bases: pro et contra
- Insights into randomized algorithms for neural networks: practical issues and common pitfalls
- 1D embedding multi-category classification methods
- Learning Theory
- The consistency of least-square regularized regression with negative association sequence
- Learning rates of regularized regression for functional data