Learning with sample dependent hypothesis spaces
Publication: 2389476
DOI: 10.1016/j.camwa.2008.09.014
zbMath: 1165.68388
OpenAlex: W2079223539
MaRDI QID: Q2389476
Publication date: 17 July 2009
Published in: Computers & Mathematics with Applications
Full work available at URL: https://doi.org/10.1016/j.camwa.2008.09.014
Keywords: learning theory; error analysis; approximation error; regularization scheme; sample dependent hypothesis spaces
MSC classification: Computational learning theory (68Q32); General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05)
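For orientation, the publication concerns regularization schemes over a sample dependent hypothesis space \(\mathcal{H}_{K,\mathbf{z}} = \{f = \sum_{i=1}^m c_i K(\cdot, x_i)\}\), whose span is determined by the sample \(\mathbf{z}\) itself. The sketch below is a minimal illustration of such a coefficient-based scheme, not code from the publication; the Gaussian kernel, the \(\ell^2\) coefficient penalty, and all parameter values are assumptions chosen for the example.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Gram matrix K[i, j] = exp(-||x1_i - x2_j||^2 / (2 * sigma^2))
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq / (2.0 * sigma ** 2))

def fit_coefficients(X, y, lam=0.1, sigma=1.0):
    # Closed-form minimizer of (1/m) * ||K c - y||^2 + lam * ||c||^2,
    # where K is the kernel Gram matrix on the sample itself, so the
    # hypothesis space { f = sum_i c_i K(., x_i) } depends on the sample.
    m = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    A = K.T @ K / m + lam * np.eye(m)  # normal equations
    b = K.T @ y / m
    return np.linalg.solve(A, b)

def predict(X_train, c, X_new, sigma=1.0):
    # Evaluate f(x) = sum_i c_i K(x, x_i) at new inputs.
    return gaussian_kernel(X_new, X_train, sigma) @ c

# Toy usage: regress noisy samples of sin(pi * x).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(50)
c = fit_coefficients(X, y, lam=0.05, sigma=0.3)
print(predict(X, c, np.array([[0.5]]), sigma=0.3))  # close to sin(pi/2) = 1
```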
Related Items (57)
- Coefficient regularized regression with non-iid sampling
- Normal estimation on manifolds by gradient learning
- LEAST SQUARE REGRESSION WITH COEFFICIENT REGULARIZATION BY GRADIENT DESCENT
- Statistical consistency of coefficient-based conditional quantile regression
- Nonparametric regression using needlet kernels for spherical data
- Learning by atomic norm regularization with polynomial kernels
- ERM learning algorithm for multi-class classification
- THE COEFFICIENT REGULARIZED REGRESSION WITH RANDOM PROJECTION
- Distributed regression learning with coefficient regularization
- Unnamed Item
- Distributed learning with partial coefficients regularization
- Error Analysis of Coefficient-Based Regularized Algorithm for Density-Level Detection
- Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
- Kernel-based sparse regression with the correntropy-induced loss
- Learning with coefficient-based regularization and \(\ell^1\)-penalty
- Modal additive models with data-driven structure identification
- Learning rates for least square regressions with coefficient regularization
- Distributed learning with multi-penalty regularization
- Least square regression with indefinite kernels and coefficient regularization
- Learning theory approach to a system identification problem involving atomic norm
- On the convergence rate of kernel-based sequential greedy regression
- Regression learning with non-identically and non-independently sampling
- ERM learning with unbounded sampling
- Error analysis for coefficient-based regularized regression in additive models
- Concentration estimates for learning with unbounded sampling
- Consistency analysis of spectral regularization algorithms
- Learning with Convex Loss and Indefinite Kernels
- Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
- Convergence rate of the semi-supervised greedy algorithm
- Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery
- Generalization Analysis of Fredholm Kernel Regularized Classifiers
- Unified approach to coefficient-based regularized regression
- A simpler approach to coefficient regularized support vector machines regression
- Indefinite kernel network with \(l^q\)-norm regularization
- Constructive analysis for least squares regression with generalized \(K\)-norm regularization
- Constructive analysis for coefficient regularization regression algorithms
- Perturbation of convex risk minimization and its application in differential private learning algorithms
- Distributed kernel-based gradient descent algorithms
- Classification with polynomial kernels and \(l^1\)-coefficient regularization
- Support vector machines regression with \(l^1\)-regularizer
- On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization
- Learning Rates of lq Coefficient Regularization Learning with Gaussian Kernel
- Concentration estimates for learning with \(\ell^1\)-regularizer and data dependent hypothesis spaces
- Coefficient-based regression with non-identical unbounded sampling
- Least Square Regression with lp-Coefficient Regularization
- Regularized modal regression with data-dependent hypothesis spaces
- Online pairwise learning algorithms with convex loss functions
- Nyström subsampling method for coefficient-based regularized regression
- Error Estimates for Multivariate Regression on Discretized Function Spaces
- Error bounds for learning the kernel
- Distributed learning with indefinite kernels
- Sparse additive machine with ramp loss
- Optimal rates for coefficient-based regularized regression
- Half supervised coefficient regularization for regression learning with unbounded sampling
- Unnamed Item
- Thresholded spectral algorithms for sparse approximations
- CONVERGENCE ANALYSIS OF COEFFICIENT-BASED REGULARIZATION UNDER MOMENT INCREMENTAL CONDITION
Cites Work
- Semi-supervised learning on Riemannian manifolds
- Multi-kernel regularized classifiers
- Estimation of dependences based on empirical data. Transl. from the Russian by Samuel Kotz
- Adaptive model selection using empirical complexities
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- The covering number in learning theory
- Regularization networks and support vector machines
- On universal estimators in learning theory
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- On the mathematical foundations of learning
- Error bounds for learning the kernel
- Capacity of reproducing kernel spaces in learning theory
- Estimating the approximation error in learning theory
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- DOI: 10.1162/153244302760200704
- Shannon sampling and function reconstruction from point values
- Leave-One-Out Bounds for Kernel Methods
- DOI: 10.1162/1532443041424319
- Neural Network Learning
- Algorithmic Learning Theory
- Learning Theory
- Ridge Regression: Biased Estimation for Nonorthogonal Problems
- Theory of Reproducing Kernels
- Choosing multiple parameters for support vector machines