Aspects of discrete mathematics and probability in the theory of machine learning
DOI: 10.1016/j.dam.2007.05.040
zbMath: 1142.68059
OpenAlex: W2038271790
MaRDI QID: Q2478432
Publication date: 28 March 2008
Published in: Discrete Applied Mathematics
Full work available at URL: https://doi.org/10.1016/j.dam.2007.05.040
Keywords: concentration of measure; machine learning; Vapnik-Chervonenkis dimension; covering numbers; uniform Glivenko-Cantelli theorems
Cites Work
- Polynomial bounds for VC dimension of sigmoidal and general Pfaffian neural networks
- Some limit theorems for empirical processes (with discussion)
- The Glivenko-Cantelli problem
- Estimation of dependences based on empirical data. Transl. from the Russian by Samuel Kotz
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- Existence of submatrices with all possible columns
- Central limit theorems for empirical measures
- Efficient distribution-free learning of probabilistic concepts
- Sphere packing numbers for subsets of the Boolean \(n\)-cube with bounded Vapnik-Chervonenkis dimension
- Concentration inequalities using the entropy method
- A general lower bound on the number of examples needed for learning
- Weak convergence and empirical processes. With applications to statistics
- A combinatorial problem; stability and order for models and theories in infinitary languages
- On the density of families of sets
- doi:10.1162/153244303768966111
- Learnability and the Vapnik-Chervonenkis dimension
- A theory of the learnable
- Uniform Central Limit Theorems
- Scale-sensitive dimensions, uniform convergence, and learnability
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network
- A sharp concentration inequality with applications
- Structural risk minimization over data-dependent hierarchies
- Neural Network Learning
- Probability Inequalities for Sums of Bounded Random Variables
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Convergence of stochastic processes
- Combinatorial methods in density estimation