Applying MDL to learn best model granularity
DOI: 10.1016/S0004-3702(00)00034-5
zbMath: 0948.68092
OpenAlex: W2170052858
Wikidata: Q128012734
Scholia: Q128012734
MaRDI QID: Q1583224
Q. Gao, Paul M. B. Vitányi, Ming Li
Publication date: 26 October 2000
Published in: Artificial Intelligence
Full work available at URL: https://doi.org/10.1016/s0004-3702(00)00034-5
Keywords: Kolmogorov complexity; feedforward neural network; Occam's razor; Bayes' rule; universal prior; improved elastic matching; learning best model granularity; learning optimal feature extraction interval; learning optimal hidden layer; Minimum Description Length principle (MDL); modeling robot arm; on-line handwritten character recognition
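The keywords refer to the standard two-part MDL selection rule and its Bayesian reading via the universal prior; a generic sketch of that criterion (the textbook formulation, not a formula quoted from this record) is

\[
  H_{\mathrm{MDL}} \;=\; \operatorname*{arg\,min}_{H \in \mathcal{H}} \bigl( L(H) + L(D \mid H) \bigr)
  \;\approx\; \operatorname*{arg\,max}_{H \in \mathcal{H}} P(H)\, P(D \mid H),
  \qquad P(H) \propto 2^{-K(H)},
\]

where \(L(H)\) is the number of bits needed to describe the model, \(L(D \mid H)\) the number of bits needed to describe the data given the model, and \(K(H)\) the Kolmogorov complexity of \(H\); minimizing the total description length then amounts, up to additive constants, to maximum a posteriori selection under the universal prior.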
Related Items (2)
Cites Work
- Inferring decision trees using the minimum description length principle
- Modeling by shortest data description
- Inductive reasoning and Kolmogorov complexity
- The miraculous universal distribution
- Deductive learning
- Universal coding, information, prediction, and estimation
- A theory of the learnable
- Complexity-based induction systems: Comparisons and convergence theorems
- Minimum description length induction, Bayesianism, and Kolmogorov complexity
- The minimum description length principle in coding and modeling
- An Information Measure for Classification
- The definition of random sequences
- A formal theory of inductive inference. Part I