Convergence of the optimal feedback policies in a numerical method for a class of deterministic optimal control problems (Q2753211)

From MaRDI portal

scientific article; zbMATH DE number 1667488
Language: English
Label: Convergence of the optimal feedback policies in a numerical method for a class of deterministic optimal control problems
Description: scientific article; zbMATH DE number 1667488

    Statements

    29 October 2001
    optimal control
    numerical approximation
    rate of convergence
    Markov chain approximation
    feedback control
    finite difference approximation
    Convergence of the optimal feedback policies in a numerical method for a class of deterministic optimal control problems (English)
    This paper is devoted to a Markov chain based numerical approximation method for a general class of deterministic nonlinear optimal control problems. Methods of this type yield feedback controls which converge (on most of the domain) to an optimal feedback control of the continuous problem. The problem under consideration is posed on a finite domain in \({\mathbb R}^n\), with deterministic dynamics that are affine in the control variable. The running cost \(L(u,x)\) is quadratic in the control variable \(u\) and fully nonlinear in the state variable \(x\); there is no exit cost. Using probabilistic methods, it is shown that, on regions of strong regularity, the Markov chain method yields a convergent sequence of approximations to an optimal feedback control. The results are illustrated with several computational examples.
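
The review refers to a Markov chain approximation in the sense of Kushner and Dupuis. As a rough, non-authoritative sketch of that general idea (not the construction analysed in the paper), the Python snippet below solves an assumed toy one-dimensional exit-time problem with control-affine dynamics dx/dt = u and running cost q(x) + u^2/2 on the domain [0, 1], with zero exit cost, by value iteration on the approximating chain; the domain, the state cost q, the control grid, and the stopping tolerance are all illustrative choices, not taken from the paper.

import numpy as np

# Toy instance (assumed for illustration): dynamics dx/dt = u on [0, 1],
# running cost q(x) + u**2 / 2, no cost on exiting the domain.
h = 0.01                                 # grid spacing
xs = np.arange(0.0, 1.0 + h, h)          # state grid, boundary points included
# Control grid; u = 0 is excluded because the approximating chain must move.
controls = np.array([u for u in np.linspace(-2.0, 2.0, 41) if abs(u) > 1e-8])

def q(x):
    # Assumed state-dependent running cost, chosen only for the example.
    return 1.0 + 0.5 * np.sin(2.0 * np.pi * x)

V = np.zeros_like(xs)                    # value function; 0 at the boundary (no exit cost)
policy = np.zeros_like(xs)               # approximate feedback control on the grid

# Value iteration for the controlled-chain dynamic programming equation.
# For deterministic drift b(x, u) = u the chain jumps to the neighbouring grid
# point in the direction of u, with interpolation interval dt = h / |u|.
for _ in range(500):
    V_new = V.copy()
    for i in range(1, len(xs) - 1):
        best_cost, best_u = np.inf, 0.0
        for u in controls:
            dt = h / abs(u)
            neighbour = V[i + 1] if u > 0 else V[i - 1]
            cost = (q(xs[i]) + 0.5 * u * u) * dt + neighbour
            if cost < best_cost:
                best_cost, best_u = cost, u
        V_new[i] = best_cost
        policy[i] = best_u
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

i_mid = len(xs) // 2
print("value at x = 0.5:", V[i_mid], " feedback control:", policy[i_mid])

The array policy holds the resulting approximate feedback control on the grid; objects of this kind, and their convergence to an optimal feedback control of the continuous problem, are what the paper studies.
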

    Identifiers

    zbMATH DE number 1667488