scientific article; zbMATH DE number 1014727
From MaRDI portal
Publication:4338419
zbMath 0874.93094 · MaRDI QID: Q4338419
Nico M. van Dijk, Arie Hordijk
Publication date: 10 November 1997
Full work available at URL: https://eudml.org/doc/28389
Keywords: convergence; dynamic programming; approximation; difference method; controlled Markov process; Lax-Richtmyer theorem
Cites Work
- Probability methods for approximations in stochastic control and for elliptic equations
- On the finite horizon Bellman equation for controlled Markov jump models with unbounded characteristics: Existence and approximation
- Numerical solution of partial differential equations. Transl. from the German by Peter R. Wadsack
- Extensions of Trotter's operator semigroup approximation theorems
- Survey of the stability of linear finite difference equations
- Discretization and Weak Convergence in Markov Decision Drift Processes
- Average optimal policies in Markov decision drift processes with applications to a queueing and a replacement model
- Markov Decision Drift Processes; Conditions for Optimality Obtained by Discretization
- On the Optimality of $(s,S)$-Policies in Continuous Review Inventory Models
- On the Convergence of the Discrete Time Dynamic Programming Equation for General Semigroups
- Impulsive and continuously acting control of jump processes-time discretization
- Optimal control of the service rate in an M/G/1 queueing system
- Discrete Approximation of Continuous Time Stochastic Control Systems
- Necessary and Sufficient Dynamic Programming Conditions for Continuous Time Stochastic Optimal Control