On generalized computable universal priors and their convergence
From MaRDI portal
Publication: 860822
DOI: 10.1016/j.tcs.2006.07.039 ⋮ zbMath: 1110.03031 ⋮ arXiv: cs/0503026 ⋮ OpenAlex: W2120361535 ⋮ Wikidata: Q58012433 ⋮ Scholia: Q58012433 ⋮ MaRDI QID: Q860822
Publication date: 9 January 2007
Published in: Theoretical Computer Science
Full work available at URL: https://arxiv.org/abs/cs/0503026
Keywords: algorithmic information theory ⋮ Martin-Löf randomness ⋮ sequence prediction ⋮ mixture distributions ⋮ posterior convergence ⋮ computability concepts ⋮ Solomonoff's prior ⋮ universal probability
Computational learning theory (68Q32) ⋮ Algorithmic information theory (Kolmogorov complexity, etc.) (68Q30) ⋮ Applications of computability and recursion theory (03D80)
Related Items
On universal prediction and Bayesian confirmation ⋮ Open problems in universal induction \& intelligence
Cites Work
- Universal artificial intelligence. Sequential decisions based on algorithmic probability.
- Zufälligkeit und Wahrscheinlichkeit. Eine algorithmische Begründung der Wahrscheinlichkeitstheorie. (Randomness and probability. An algorithmic foundation of probability theory)
- Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit
- Von Mises' definition of random sequences reconsidered
- A Theory of Program Size Formally Identical to Information Theory
- Complexity-based induction systems: Comparisons and convergence theorems
- Minimum description length induction, Bayesianism, and Kolmogorov complexity
- Convergence and Error Bounds for Universal Prediction of Nonbinary Sequences
- Learning Theory and Kernel Machines
- Algorithmic Learning Theory
- The complexity of finite objects and the development of the concepts of information and randomness by means of the theory of algorithms
- A formal theory of inductive inference. Part II
- Algorithmic Learning Theory