State-Constrained Stochastic Optimal Control Problems via Reachability Approach
DOI: 10.1137/15M1023737
zbMath: 1348.49015
OpenAlex: W2411640870
MaRDI QID: Q2822794
Olivier Bokanowski, Athena Picarelli, Hasnaa Zidani
Publication date: 5 October 2016
Published in: SIAM Journal on Control and Optimization
Full work available at URL: https://doi.org/10.1137/15m1023737
Keywords: state constraints; viscosity solutions; stochastic optimal control; Hamilton-Jacobi equations; stochastic target problems
MSC classification:
- Dynamic programming in optimal control and differential games (49L20)
- Nonlinear parabolic equations (35K55)
- Optimal stochastic control (93E20)
- Viscosity solutions to Hamilton-Jacobi equations in optimal control and differential games (49L25)
- Optimality conditions for problems involving randomness (49K45)
- Existence of optimal solutions to problems involving randomness (49J55)
Cites Work
- Error estimates for second order Hamilton-Jacobi-Bellman equations. Approximation of probabilistic reachable sets
- Optimal control versus stochastic target problems: an equivalence result
- Optimal stochastic control, stochastic target problems, and backward SDE.
- Uniqueness results for boundary value problems arising from finite fuel and other singular and unbounded stochastic control problems
- On some recent aspects of stochastic control and their applications
- Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations
- Nonlinear elliptic equations with singular boundary conditions and stochastic control with state constraints. I: The model problem
- Optimal times for constrained nonlinear control problems without local controllability
- Numerical schemes for discontinuous value functions of optimal control
- Dynamic programming for stochastic target problems and geometric flows
- The viability kernel algorithm for computing value functions of infinite horizon optimal control problems
- Existence of neighboring feasible trajectories: applications to dynamic programming for state-constrained optimal control problems
- Infinite horizon problems on stratifiable state-constraints sets
- Dynamic programming and error estimates for stochastic control problems with maximum cost
- Hamilton-Jacobi-Bellman equations with fast gradient-dependence
- Reachability and Minimal Times for State Constrained Nonlinear Problems without Any Controllability Assumption
- Stochastic Target Problems with Controlled Loss
- Optimal Control under Stochastic Target Constraints
- On the Existence of Optimal Controls
- Hamilton-Jacobi Equations with State Constraints
- Numerical Approximation for a Superreplication Problem under Gamma Constraints
- Optimal Control with State-Space Constraint I
- Optimal Control with State-Space Constraint. II
- Existence Theory for a Stochastic Bolza Problem
- User’s guide to viscosity solutions of second order partial differential equations
- A strong comparison result for the Bellman equation arising in stochastic exit time control problems and its applications
- Level Sets of Viscosity Solutions: Some Applications to Fronts and Rendez-vous Problems
- A class of stochastic optimal control problems with state constraint
- Stochastic Target Problems, Dynamic Programming, and Viscosity Solutions
- A Stochastic Representation for the Level Set Equations
- A New Formulation of State Constraint Problems for First-Order PDEs
- Weak Dynamic Programming for Generalized State Constraints
- A general Hamilton-Jacobi framework for non-linear state-constrained control problems
- Ellipsoidal Techniques for Reachability Under State Constraints
- Hamilton–Jacobi Formulation for Reach–Avoid Differential Games