Efficient algorithms for learning functions with bounded variation
From MaRDI portal
Publication:1887165
DOI: 10.1016/S0890-5401(03)00164-0
zbMath: 1090.68054
OpenAlex: W1998017096
MaRDI QID: Q1887165
Publication date: 23 November 2004
Published in: Information and Computation
Full work available at URL: https://doi.org/10.1016/s0890-5401(03)00164-0
Keywords: Nonparametric regression; Computational learning theory; Statistical learning theory; Bounded variation; Sample complexity
MSC classification:
- Nonparametric regression and quantile regression (62G08)
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Computational learning theory (68Q32)
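The title and keywords refer to functions of bounded variation. As a minimal illustration of the underlying notion (this sketch is not taken from the paper itself): the total variation of f on [a, b] is the supremum over partitions of the sum of |f(x_{i+1}) - f(x_i)|, and it can be approximated from below on a fine uniform grid.

```python
import math

def total_variation(f, a, b, n=1000):
    # Approximate the total variation of f on [a, b]:
    # V(f) = sup over partitions of sum |f(x_{i+1}) - f(x_i)|.
    # A fine uniform grid gives a lower bound that converges
    # for piecewise-monotone functions.
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(abs(f(xs[i + 1]) - f(xs[i])) for i in range(n))

# sin(2*pi*x) on [0, 1] consists of four monotone quarter-periods,
# each sweeping a range of 1, so its total variation is 4.
tv = total_variation(lambda x: math.sin(2 * math.pi * x), 0.0, 1.0)
```

For a monotone function the sum telescopes, so the variation is simply |f(b) - f(a)|; the grid estimate is exact whenever the partition points include all local extrema, as in the example above.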
Cites Work
- Equivalence of models for polynomial learnability
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- Distribution inequalities for the binomial law
- Prediction, learning, uniform convergence, and scale-sensitive dimensions
- Efficient distribution-free learning of probabilistic concepts
- Predicting \(\{ 0,1\}\)-functions on randomly drawn points
- Toward efficient agnostic learning
- A generalization of Sauer's lemma
- General bounds on the number of examples needed for learning probabilistic concepts
- The complexity of learning according to two models of a drifting environment
- Learnability and the Vapnik-Chervonenkis dimension
- A theory of the learnable
- Bounds on the Number of Examples Needed for Learning Functions
- Covering numbers for real-valued function classes
- Scale-sensitive dimensions, uniform convergence, and learnability
- The importance of convexity in learning with squared loss
- Neural Network Learning
- Convergence of stochastic processes