Prediction, learning, uniform convergence, and scale-sensitive dimensions
From MaRDI portal
Publication: 1271550
DOI: 10.1006/jcss.1997.1557
zbMath: 0945.68529
OpenAlex: W2071570468
MaRDI QID: Q1271550
Peter L. Bartlett, Philip M. Long
Publication date: 27 September 2000
Published in: Journal of Computer and System Sciences
Full work available at URL: https://doi.org/10.1006/jcss.1997.1557
Related Items (8)
- Efficient algorithms for learning functions with bounded variation
- On the complexity of learning from drifting distributions
- The learnability of quantum states
- Corrigendum to: "Prediction, learning, uniform convergence, and scale-sensitive dimensions"
- Learning bounds for quantum circuits in the agnostic setting
- The generalization performance of ERM algorithm with strongly mixing observations
- Using the doubling dimension to analyze the generalization of learning algorithms
- Unnamed Item
Cites Work
- Learnability with respect to fixed distributions
- Equivalence of models for polynomial learnability
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- Efficient distribution-free learning of probabilistic concepts
- Predicting \(\{ 0,1\}\)-functions on randomly drawn points
- Toward efficient agnostic learning
- Sphere packing numbers for subsets of the Boolean \(n\)-cube with bounded Vapnik-Chervonenkis dimension
- A general lower bound on the number of examples needed for learning
- Fat-shattering and the learnability of real-valued functions
- Learnability and the Vapnik-Chervonenkis dimension
- Scale-sensitive dimensions, uniform convergence, and learnability
- Convergence of stochastic processes
This page was built for publication: Prediction, learning, uniform convergence, and scale-sensitive dimensions