Existence of asymptotic values for nonexpansive stochastic control systems
DOI: 10.1007/s00245-013-9230-4
zbMATH: 1294.93083
OpenAlex: W2096430401
MaRDI QID: Q741140
Rainer Buckdahn, Dan Goreac, Marc Quincampoix
Publication date: 10 September 2014
Published in: Applied Mathematics and Optimization
Full work available at URL: https://doi.org/10.1007/s00245-013-9230-4
Related Items (12)
- Uniform Asymptotics in the Average Continuous Control of Piecewise Deterministic Markov Processes: Vanishing Approach
- SIR epidemics with state-dependent costs and ICU constraints: a Hamilton-Jacobi verification argument and dual LP algorithms
- Limit value for optimal control with general means
- Representation of asymptotic values for nonexpansive stochastic control systems
- Representation Formulas for Limit Values of Long Run Stochastic Optimal Controls
- Representation of limit values for nonexpansive stochastic differential games
- On representation formulas for long run averaging optimal control problem
- Abel-type results for controlled piecewise deterministic Markov processes
- Asymptotics of values in dynamic games on large intervals
- Uniform Tauberian theorem in differential games
- Asymptotic Control for a Class of Piecewise Deterministic Markov Processes Associated to Temperate Viruses
- Optimality issues for a class of controlled singularly perturbed stochastic systems
Cites Work
- A note on linearization methods and dynamic programming principles for stochastic discontinuous control problems
- Mayer and optimal stopping stochastic control problems with discontinuous cost
- Stochastic optimal control and linear programming approach
- Uniform value in dynamic programming
- Ergodic BSDEs and related PDEs with Neumann boundary conditions
- Limit theorem for controlled backward SDEs and homogenization of Hamilton-Jacobi-Bellman equations
- Time-average control of martingale problems: Existence of a stationary solution
- Approximating value functions for controlled degenerate diffusion processes by using piece-wise constant policies.
- Ergodic problem for the Hamilton-Jacobi-Bellman equation. II
- The value function of singularly perturbed control systems
- On the rate of convergence of finite-difference approximations for Bellman's equations with variable coefficients
- Discontinuous control problems with state constraints: linear formulations and dynamic programming principles
- Another proof for the equivalence between invariance of closed sets with respect to stochastic and deterministic systems
- Averaging of singularly perturbed controlled stochastic differential equations
- Occupation measures for controlled Markov processes: Characterization and optimality
- On the Existence of a Limit Value in Some Nonexpansive Optimal Control Problems
- Ergodic control of degenerate diffusions
- Linearization techniques for $\mathbb{L}^{\infty}$-control problems and dynamic programming principles in classical and $\mathbb{L}^{\infty}$-control problems
- On ergodic problem for Hamilton-Jacobi-Isaacs equations
- Linear Programming Approach to Deterministic Infinite Horizon Optimal Control Problems with Discounting
- A Uniform Tauberian Theorem in Dynamic Programming
- Existence of stochastic control under state constraints
- On ergodic stochastic control
- Existence of Markov Controls and Characterization of Optimal Markov Controls
- On the convergence rate of approximation schemes for Hamilton-Jacobi-Bellman Equations
- Nonlinear Optimal Control via Occupation Measures and LMI-Relaxations
- Set-valued analysis
- Viability theory