Discrete control processes, dynamic games and multicriterion control problems. (Q698946)
scientific article; zbMATH DE number 1810317
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Discrete control processes, dynamic games and multicriterion control problems. | scientific article; zbMATH DE number 1810317 | |
Statements
Discrete control processes, dynamic games and multicriterion control problems. (English)
2002
The author proposes a recursive definition for numerically evaluating an ``objective cost'' (or state evaluation) at each state of a dynamic system. The system is assumed to be deterministic: the current state is uniquely determined by the state and control parameters at the previous time instant, and consequently the state evaluation at each time point is likewise determined by the state and control parameters at the previous time instant. This general scheme subsumes several earlier models of discrete dynamic control, including the integral-time criterion, discrete control problems on a network, game-theoretic control models with several players, and multicriterion control problems. A dynamic-programming-type algorithm for computing an optimal control can be devised whenever the numerical function used in the recursive definition satisfies a monotonicity condition with respect to the cost evaluation at the previous state and time.
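To make the scheme concrete, the following is a minimal sketch, assuming a finite deterministic process with a recursively updated evaluation $F_{t+1}(x_{t+1}) = g_t(F_t(x_t), x_t, u_t)$, where $x_{t+1} = f_t(x_t, u_t)$. The names `forward_dp`, `f`, `g`, and the toy instance are illustrative assumptions, not the author's notation. When each $g_t$ is non-decreasing in the previous evaluation (the monotonicity condition mentioned above), the minimal reachable evaluation satisfies a Bellman-type forward recursion, which is what the sketch computes.

```python
from math import inf


def forward_dp(states, controls, T, f, g, x0, F0=0.0):
    """Forward dynamic programming for a deterministic discrete control process.

    The state evaluation is assumed to be defined recursively,
        F_{t+1}(x_{t+1}) = g(t, F_t(x_t), x_t, u_t),  x_{t+1} = f(t, x_t, u_t),
    and g is assumed non-decreasing in its second argument (the previous
    evaluation), which is what justifies the Bellman-type recursion below.
    """
    # V[t][x] = minimal evaluation reachable at state x after t steps
    V = [{x: inf for x in states} for _ in range(T + 1)]
    best = [dict() for _ in range(T + 1)]  # back-pointers (previous state, control)
    V[0][x0] = F0

    for t in range(T):
        for x in states:
            if V[t][x] == inf:
                continue  # state x is not reachable at time t
            for u in controls:
                x_next = f(t, x, u)            # deterministic transition
                cand = g(t, V[t][x], x, u)     # recursive evaluation update
                if cand < V[t + 1][x_next]:
                    V[t + 1][x_next] = cand
                    best[t + 1][x_next] = (x, u)
    return V, best


if __name__ == "__main__":
    # Toy instance: additive (integral-time) criterion on a 3-state cycle.
    states = [0, 1, 2]
    controls = ["stay", "move"]

    def f(t, x, u):
        return x if u == "stay" else (x + 1) % 3

    def g(t, F_prev, x, u):
        return F_prev + (1.0 if u == "stay" else 2.0)

    V, best = forward_dp(states, controls, T=4, f=f, g=g, x0=0)
    print(V[4])   # minimal evaluations reachable at each state after 4 steps
```

In this toy instance the update is additive, so the scheme reduces to the usual integral-time (shortest-path) criterion; game-theoretic and multicriterion variants would replace the update by player-specific or vector-valued functions.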
discrete-time control
dynamic programming
game theory
state evaluation
integral-time criterion
monotonicity condition