Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies
Publication: 1657947
DOI: 10.1214/18-EJS1399
zbMATH: 1410.62087
arXiv: 1111.3781
OpenAlex: W2962925804
Wikidata: Q129560965
Scholia: Q129560965
MaRDI QID: Q1657947
Publication date: 14 August 2018
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/1111.3781
Keywords: regularization; reproducing kernel Hilbert spaces; error bound; generalization error; multiple kernel learning; fast learning rate
MSC: Hypothesis testing in multivariate analysis (62H15); Learning and adaptive systems in artificial intelligence (68T05)
Cites Work
- Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness
- SpicyMKL: a fast algorithm for multiple kernel learning with thousands of kernels
- Sparsity in multiple kernel learning
- Differential evolution. A practical approach to global optimization. With CD-ROM.
- Multi-kernel regularized classifiers
- Advances in differential evolution
- High-dimensional additive modeling
- Information-theoretic determination of minimax rates of convergence
- A Bennett concentration inequality and its application to suprema of empirical processes
- Weak convergence and empirical processes. With applications to statistics
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Some results on Tchebycheffian spline functions and stochastic processes
- Local Rademacher complexities
- Error bounds for learning the kernel
- Support Vector Machines
- Learning Bounds for Support Vector Machines with Learned Kernels
- Minimax-optimal rates for sparse additive models over kernel classes via convex programming
- Model Selection and Estimation in Regression with Grouped Variables
- Convexity, Classification, and Risk Bounds
- New concentration inequalities in product spaces