Two adaptively stepped monotone algorithms for solving discounted dynamic programming equations
From MaRDI portal
Publication: 4293291
DOI: 10.1080/01630569308816514
zbMath: 0801.90124
OpenAlex: W2058625809
MaRDI QID: Q4293291
No author found.
Publication date: 1 December 1994
Published in: Numerical Functional Analysis and Optimization
Full work available at URL: https://doi.org/10.1080/01630569308816514
Keywords: infinite horizon; discrete time; finite state; finite action; discounted, stationary Markov decision processes
MSC classifications: Sequential statistical methods (62L99); Dynamic programming in optimal control and differential games (49L20); Dynamic programming (90C39); Markov and semi-Markov decision processes (90C40)
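For context on the problem class the keywords describe (finite-state, finite-action, discounted stationary Markov decision processes), the discounted dynamic programming equation is the fixed-point condition v = max_a (r_a + γ P_a v). The sketch below solves it by plain value iteration on a hypothetical 2-state, 2-action MDP; it illustrates the equation being solved, not the paper's adaptively stepped monotone algorithms, and all numbers are invented for illustration.

```python
import numpy as np

# Hypothetical 2-state, 2-action discounted MDP (invented data).
# P[a, s, s'] = transition probability from state s to s' under action a.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.5, 0.5], [0.7, 0.3]],
])
# r[a, s] = immediate reward for taking action a in state s.
r = np.array([
    [1.0, 0.0],
    [0.5, 2.0],
])
gamma = 0.9  # discount factor, 0 <= gamma < 1

# Value iteration: repeatedly apply the Bellman operator
#   (T v)(s) = max_a [ r(a, s) + gamma * sum_{s'} P(a, s, s') v(s') ]
# which is a contraction, so the iterates converge to the unique
# solution of the discounted dynamic programming equation.
v = np.zeros(2)
for _ in range(1000):
    v_new = (r + gamma * P @ v).max(axis=0)
    if np.abs(v_new - v).max() < 1e-10:
        break
    v = v_new
```

The converged vector `v` satisfies the dynamic programming equation to within the stopping tolerance; a greedy policy can then be read off as the argmax over actions.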
Cites Work
- Unnamed Item
- Monotonicity of Mangasarian's iterative algorithm for generalized linear complementarity problems
- Stochastic optimal control. The discrete time case
- Computational methods in optimization. A unified approach.
- Methods of descent for nondifferentiable optimization
- Dual Approximations in Optimal Control
- Singular Stochastic Control Problems Solved by a Sparse Simplex Method
- Adaptive aggregation methods for infinite horizon dynamic programming
- Technical Note—Accelerated Computation of the Expected Discounted Return in a Markov Chain
- A generalization of the linear complementarity problem