On solutions of the distributional Bellman equation
DOI: 10.3934/era.2023228
arXiv: 2202.00081
OpenAlex: W4380996818
MaRDI QID: Q6153120
Julian Gerstenberg, Ralph Neininger, Denis Spiegel
Publication date: 13 February 2024
Published in: Electronic Research Archive
Full work available at URL: https://arxiv.org/abs/2202.00081
Keywords: regular variation; Markov decision process; machine learning; random difference equation; perpetuity; distributional reinforcement learning; distributional Bellman equation
MSC classifications: Computational learning theory (68Q32); Stationary stochastic processes (60G10); Numerical analysis or methods applied to Markov chains (65C40); Random operators and equations (aspects of stochastic analysis) (60H25); Numerical solutions to stochastic differential and integral equations (65C30); Numerical methods for difference equations (65Q10)
Cites Work
- A fixed point theorem for distributions
- Strict stationarity of generalized autoregressive processes
- Stability of perpetuities
- Conditions for convergence of random coefficient \(\mathrm{AR}(1)\) processes and perpetuities in higher dimensions
- Stochastic Models with Power-Law Tails
- On a stochastic difference equation and a representation of non-negative infinitely divisible random variables
- The stochastic equation Y_{n+1} = A_n Y_n + B_n with stationary coefficients
- Discounted MDPs: Distribution Functions and Exponential Utility Maximization
- Stability of perpetuities in Markovian environment
- Foundations of Modern Probability
- Distributional Reinforcement Learning