LP Formulations of Discrete Time Long-Run Average Optimal Control Problems: The Non-Ergodic Case
DOI: 10.1137/18M1229432 · zbMath: 1420.49036 · arXiv: 1812.04790 · OpenAlex: W2950888428 · Wikidata: Q127817026 · Scholia: Q127817026 · MaRDI QID: Q5232205
Vivek S. Borkar, Vladimir Gaitsgory, Ilya A. Shvartsman
Publication date: 30 August 2019
Published in: SIAM Journal on Control and Optimization
Full work available at URL: https://arxiv.org/abs/1812.04790
Keywords: linear programming; duality; long-run average optimal control; infinite horizon and vanishing discount limits; sufficient/necessary optimality conditions
MSC classifications: Numerical methods involving duality (49M29); Discrete-time control/observation systems (93C55); Duality theory (optimization) (49N15)
Related Items (8)
Cites Work
- On sets of occupational measures generated by a deterministic control system on an infinite time horizon
- On representation formulas for long run averaging optimal control problem
- Stochastic optimal control and linear programming approach
- A review of duality theory for linear programming over topological vector spaces
- Approximate solutions of the Bellman equation of deterministic control theory
- The problem of optimal control with reflection studied through a linear optimization problem stated on occupational measures
- Time-average control of martingale problems: A linear programming formulation
- On a discrete approximation of the Hamilton-Jacobi equation of dynamic programming
- A convex analytic approach to Markov decision processes
- On the relation between discounted and average optimal value functions
- Tauberian theorem for value functions
- Linear programming formulations of deterministic infinite horizon optimal control problems in discrete time
- Linear programming formulation of long-run average optimal control problem
- Turnpike phenomenon and infinite horizon optimal control
- Occupation measures for controlled Markov processes: Characterization and optimality
- Linear programming based optimality conditions and approximate solution of a deterministic infinite horizon discounted optimal control problem in discrete time
- A Uniform Tauberian Theorem in Optimal Control
- Ergodic Control of Diffusion Processes
- On the Existence of a Limit Value in Some Nonexpansive Optimal Control Problems
- Linearization techniques for $\mathbb{L}^{\infty}$-control problems and dynamic programming principles in classical and $\mathbb{L}^{\infty}$-control problems
- Convex Duality Approach to the Optimal Control of Diffusions
- Linear Programming Approach to Deterministic Infinite Horizon Optimal Control Problems with Discounting
- Constrained Undiscounted Stochastic Dynamic Programming
- Linear Programming and Markov Decision Chains
- A Uniform Tauberian Theorem in Dynamic Programming
- Asymptotic Controllability and Exponential Stabilization of Nonlinear Control Systems at Singular Points
- On ergodic stochastic control
- Existence of Markov Controls and Characterization of Optimal Markov Controls
- Convex Duality and Nonlinear Optimal Control
- The linear programming approach to deterministic optimal control problems
- Stability of the Turnpike Phenomenon in Discrete-Time Optimal Control Problems
- Nonlinear Optimal Control via Occupation Measures and LMI-Relaxations
- Viability theory
- Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations
- Asymptotic behavior of the value functions of discrete-time discounted optimal control