Infinite Horizon Stochastic Optimal Control Problems with Running Maximum Cost
DOI: 10.1137/17M115253X · zbMath: 1400.49027 · OpenAlex: W2766347619 · Wikidata: Q129206505 · Scholia: Q129206505 · MaRDI QID: Q4684783
Axel Kröner, Hasnaa Zidani, Athena Picarelli
Publication date: 25 September 2018
Published in: SIAM Journal on Control and Optimization
Full work available at URL: https://doi.org/10.1137/17m115253x
Keywords: convergence; dynamic programming; stochastic optimal control; semi-Lagrangian schemes; viscosity solutions; running maximum
MSC classifications:
- Dynamic programming in optimal control and differential games (49L20)
- Optimal stochastic control (93E20)
- Stability and convergence of numerical methods for initial value and initial-boundary value problems involving PDEs (65M12)
- Existence theories for optimal control problems involving partial differential equations (49J20)
- Viscosity solutions to Hamilton-Jacobi equations in optimal control and differential games (49L25)
- Optimality conditions for problems involving randomness (49K45)
Related Items (2)
Cites Work
- \((L^\infty+\mathrm{Bolza})\) control problems as dynamic differential games
- Homogenization in \(L^\infty\)
- Fully nonlinear oblique derivative problems for nonlinear second-order elliptic PDE's
- Zubov's equation for state-constrained perturbed nonlinear systems
- Zubov's method for controlled diffusions with state constraints
- On a discrete approximation of the Hamilton-Jacobi equation of dynamic programming
- On oblique derivative problems for fully nonlinear second-order elliptic PDE's on domains with corners
- Discontinuous differential games and control systems with supremum cost
- Consistency of a simple multidimensional scheme for Hamilton-Jacobi-Bellman equations
- Dynamic programming and error estimates for stochastic control problems with maximum cost
- Convergent difference schemes for nonlinear parabolic equations and mean curvature motion
- Semi-Lagrangian schemes for linear and fully non-linear diffusion equations
- The Bellman equation for minimizing the maximum cost
- Weak Dynamic Programming Principle for Viscosity Solutions
- Linearization techniques for $\mathbb{L}^{\infty}$-control problems and dynamic programming principles in classical and $\mathbb{L}^{\infty}$-control problems
- Optimal control of diffusion processes and Hamilton-Jacobi-Bellman equations. Part I: The dynamic programming principle and applications
- Optimal Control of the Running Max
- On uniqueness and existence of viscosity solutions of fully nonlinear second-order elliptic PDE's
- User’s guide to viscosity solutions of second order partial differential equations
- Optimal Control on the $L^\infty $ Norm of a Diffusion Process
- Consistency of Generalized Finite Difference Schemes for the Stochastic HJB Equation
- Value function and optimal trajectories for a maximum running cost control problem with state constraints. Application to an abort landing problem
- Convergence of numerical schemes for parabolic equations arising in finance theory
- An approximation scheme for the optimal control of diffusion processes
- The Bellman equation for control of the running max of a diffusion and applications to look-back options
- Some Estimates for Finite Difference Approximations
- A fast algorithm for the two dimensional HJB equation of stochastic control
- A Bellman equation for minimizing the maximum cost