An Accelerated Value/Policy Iteration Scheme for Optimal Control Problems and Games
From MaRDI portal
Publication: 5264875
DOI: 10.1007/978-3-319-10705-9_48
zbMath: 1328.65142
OpenAlex: W625463455
MaRDI QID: Q5264875
Alessandro Alla, Maurizio Falcone, Dante Kalise
Publication date: 28 July 2015
Published in: Lecture Notes in Computational Science and Engineering
Full work available at URL: https://doi.org/10.1007/978-3-319-10705-9_48
Cites Work
- An Efficient Policy Iteration Algorithm for Dynamic Programming Equations
- Some Convergence Results for Howard's Algorithm
- An optimal one-way multigrid algorithm for discrete-time stochastic control
- On the Convergence of Policy Iteration in Stationary Dynamic Programming
- Convergence Properties of Policy Iteration
- Algorithms for Stochastic Games with Geometrical Interpretation
- Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations