On the sufficiency of the Hamilton-Jacobi-Bellman equation for optimality of the controls in a linear optimal-time problem (Q1079174)
From MaRDI portal
scientific article; zbMATH DE number 3962582
Statements
Title: On the sufficiency of the Hamilton-Jacobi-Bellman equation for optimality of the controls in a linear optimal-time problem (English)
Publication year: 1986
This paper deals with the Hamilton-Jacobi-Bellman (HJB) equation associated with the optimal-time control problem for autonomous finite-dimensional linear systems. Although the principal result, established in Theorem 4.2, is true, it is almost useless: it gives a sufficient condition for a solution of the HJB equation to be the optimal cost function in terms of the generalized gradients of the optimal cost function itself. In other words, in order to verify whether or not a solution of the HJB equation is the desired solution (i.e. the optimal cost function), one must already know that desired solution. There are also some errors and deficiencies of expression, among them the following:
1) The hypothesis that the set of controls U contains 0 in its interior is mentioned only in brackets, although that hypothesis is essential for proving that the optimal cost function is Lipschitz continuous.
2) In (4.2) the correct expression for \(\hat w\) is \(\hat w(t)=(L(t))'\hat w(0)\).
3) In Definition 2.2, the phrase ``... and E(t) is constant everywhere'' is confusing, because that property is a consequence of the first part of the definition of maximal controls, not a part of the definition.
4) The correct statement of Theorem 4.1 should be the following: if, for some \(x\in S\), \((\hat w(\cdot),\hat v(\cdot))\) satisfy..., then \(\hat v\) is optimal for P(x).
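For readers unfamiliar with the setting, the standard time-optimal problem for a linear system and its associated HJB equation can be sketched as follows. This is the generic textbook formulation, not taken from the reviewed paper; the notation (\(A\), \(B\), the minimum-time function \(T\)) is assumed for illustration, and the constraint \(0 \in \operatorname{int} U\) is the hypothesis the review points out as essential.

```latex
% Autonomous linear control system, controls confined to a compact convex set U
% (with 0 in the interior of U, as the review emphasizes):
\dot x(t) = A x(t) + B u(t), \qquad u(t) \in U .

% Minimum-time (optimal cost) function for steering x to the target, e.g. the origin:
T(x) = \inf \{\, t \ge 0 : x(t; x, u) = 0 \text{ for some admissible } u(\cdot) \,\} .

% Formally, T satisfies the HJB equation of time-optimal control,
\min_{u \in U} \big\langle \nabla T(x),\, A x + B u \big\rangle + 1 = 0 ,
% interpreted via generalized (Clarke) gradients at points where T is
% merely Lipschitz continuous rather than differentiable.
```

The reviewer's objection can be read against this sketch: a useful verification theorem should certify that a given solution of the HJB equation equals \(T\) without presupposing knowledge of \(T\) or its generalized gradients.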
Keywords: Hamilton-Jacobi-Bellman (HJB) equation; optimal-time control; autonomous finite-dimensional linear systems; generalized gradients