Hamilton-Jacobi characterization of the state constrained value
Publication: 1775876
DOI: 10.1016/j.na.2003.06.005
zbMath: 1098.49005
OpenAlex: W2068323724
MaRDI QID: Q1775876
Frank H. Clarke, Ronald J. Stern
Publication date: 4 May 2005
Published in: Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods
Full work available at URL: https://doi.org/10.1016/j.na.2003.06.005
MSC classification
- Dynamic programming in optimal control and differential games (49L20)
- Nonsmooth analysis (49J52)
- Existence theories for optimal control problems involving ordinary differential equations (49J15)
- Viscosity solutions to Hamilton-Jacobi equations in optimal control and differential games (49L25)
Related Items
- Hamilton-Jacobi-Bellman equations for optimal control processes with convex state constraints
- Hamilton–Jacobi–Bellman Equations
- Legendre transform and applications to finite and infinite optimization
- Infinite horizon problems on stratifiable state-constraints sets
Cites Work
- Necessary conditions for free end-time, measurably time dependent optimal control problems with state constraints
- Optimality principles and representation formulas for viscosity solutions of Hamilton-Jacobi equations. II. Equations of control problems with state constraints
- On nonconvex differential inclusions whose state is constrained in the closure of an open set. Applications to dynamic programming
- Qualitative properties of trajectories of control systems: a survey
- Existence of neighboring feasible trajectories: applications to dynamic programming for state-constrained optimal control problems
- Filippov's and Filippov-Ważewski's theorems on closed domains
- Optimization and nonsmooth analysis
- Optimal Control with State-Space Constraint I
- Clarke's tangent cones and the boundaries of closed sets in R^n
- Feedback in state constrained optimal control
- Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations