Learning decision trees from random examples
DOI: 10.1016/0890-5401(89)90001-1 · zbMath: 0679.68157 · OpenAlex: W2029058452 · MaRDI QID: Q1823009
Andrzej Ehrenfeucht, David Haussler
Publication date: 1989
Published in: Information and Computation
Full work available at URL: https://doi.org/10.1016/0890-5401(89)90001-1
Mathematics Subject Classification: Trees (05C05) · Learning and adaptive systems in artificial intelligence (68T05) · Management decision making, including multiple objectives (90B50) · Boolean functions (06E30)
Related Items (24)
- Learning DNF in time \(2^{\widetilde O(n^{1/3})}\)
- On PAC learning algorithms for rich Boolean function classes
- Quantifying inductive bias: AI learning algorithms and Valiant's learning framework
- Knowing what doesn't matter: exploiting the omission of irrelevant data
- Grey-box steganography
- (Nearly-)tight bounds on the contiguity and linearity of cographs
- On (simple) decision tree rank
- Linear threshold functions in decision lists, decision trees, and depth-2 circuits
- Rotation distance for rank bounded trees
- Submodular goal value of Boolean functions
- Logical analysis of data: classification with justification
- Hierarchical linear support vector machine
- A syntactic characterization of bounded-rank decision trees in terms of decision lists
- The Vapnik-Chervonenkis dimension of decision trees with bounded rank
- Grey-Box Steganography
- Rank-\(r\) decision trees are a subclass of \(r\)-decision lists
- Minimization of decision trees is hard to approximate
- A machine discovery from amino acid sequences by decision trees over regular patterns
- Monotone term decision lists
- Decision tree approximations of Boolean functions
- Prediction-preserving reducibility
- An Algebraic Perspective on Boolean Function Learning
- Learning DNF from random walks
- On the isomorphism problem for decision trees and decision lists
Cites Work
- Quantifying inductive bias: AI learning algorithms and Valiant's learning framework
- Occam's razor
- Inferring decision trees using the minimum description length principle
- Learnability and the Vapnik-Chervonenkis dimension
- A theory of the learnable
- Computational limitations on learning from examples