Convergence of Markov chains in information divergence
Publication: 1014048
DOI: 10.1007/s10959-007-0133-7 ⋮ zbMath: 1169.60016 ⋮ OpenAlex: W2058409688 ⋮ MaRDI QID: Q1014048
Peter Harremoës, Klaus Kähler Holst
Publication date: 24 April 2009
Published in: Journal of Theoretical Probability
Full work available at URL: https://doi.org/10.1007/s10959-007-0133-7
Keywords: Markov chain ⋮ ergodic theorems ⋮ information divergence ⋮ reversible Markov chain ⋮ decreasing information ⋮ increasing information
MSC classification: Strong limit theorems (60F15) ⋮ Markov chains (discrete-time Markov processes on discrete state spaces) (60J10) ⋮ Information theory (general) (94A15) ⋮ Probability theory on linear topological spaces (60B11)
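The paper's theme, that the information divergence D(P_n || pi) between the marginal distributions of an ergodic Markov chain and its stationary distribution is non-increasing and tends to zero, can be illustrated with a minimal numerical sketch. The transition matrix and initial distribution below are hypothetical examples, not taken from the paper.

```python
import numpy as np

def kl(p, q):
    """Information divergence D(p || q) for discrete distributions."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# A small ergodic transition matrix (rows sum to 1); hypothetical example.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

# Evolve an initial distribution and record D(p_n || pi); by the data
# processing inequality this sequence is non-increasing, and for an
# ergodic chain it converges to 0.
p = np.array([1.0, 0.0])
divs = []
for _ in range(10):
    divs.append(kl(p, pi))
    p = p @ P

assert all(divs[i] >= divs[i + 1] - 1e-12 for i in range(len(divs) - 1))
```

The monotonicity asserted at the end is the finite-state instance of the "decreasing information" phenomenon listed in the keywords above.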
Related Items (4)
Maximum entropy on compact groups ⋮ The maximum entropy rate description of a thermodynamic system in a stationary non-equilibrium state ⋮ Information in Probability: Another Information-Theoretic Proof of a Finite de Finetti Theorem ⋮ Freezing phase transition in a fractal potential
Cites Work
- Sanov property, generalized I-projection and a conditional limit theorem
- Entropy and the central limit theorem
- I-divergence geometry of probability distributions and minimization problems
- Fisher information inequalities and the central limit theorem
- Information theory and the limit-theorem for Markov chains and processes with a countable infinity of states
- Passage to the Limit under the Information and Entropy Signs
- Entropy and the Law of Small Numbers
- Binomial and Poisson distributions as maximum entropy distributions
- Information projections revisited
- Solution of Shannon’s problem on the monotonicity of entropy