A stabilizing reinforcement learning approach for sampled systems with partially unknown models
Publication: 6646984
DOI: 10.1002/RNC.7626
MaRDI QID: Q6646984
Stefan Streif, Lukas Beckenbach, Pavel Osinenko
Publication date: 3 December 2024
Published in: International Journal of Robust and Nonlinear Control
Adaptive control/observation systems (93C40); Dynamic programming (90C39); Digital control/observation systems (93C62)
Cites Work
- Title not available
- Title not available
- Optimal control of unknown affine nonlinear discrete-time systems using offline-trained neural networks with proof of convergence
- Integral reinforcement learning and experience replay for adaptive optimal control of partially-unknown constrained-input continuous-time systems
- Neural-network-based robust optimal control design for a class of uncertain nonlinear systems via adaptive dynamic programming
- Q-learning for continuous-time linear systems: A model-free infinite horizon optimal control approach
- Control Lyapunov functions for adaptive nonlinear stabilization
- Linear programming based Lyapunov function computation for differential inclusions
- Complete stability analysis of a heuristic approximate dynamic programming control design
- Online actor-critic algorithm to solve the continuous-time infinite horizon optimal control problem
- A 'universal' construction of Artstein's theorem on nonlinear stabilization
- A novel actor-critic-identifier architecture for approximate optimal control of uncertain nonlinear systems
- An alternative converse Lyapunov theorem for discrete-time systems
- Novel iterative neural dynamic programming for data-based approximate optimal control design
- Neural-network-observer-based optimal control for unknown nonlinear systems using adaptive dynamic programming
- Online adaptive algorithm for optimal control with integral reinforcement learning
- Online reinforcement learning for a class of partially unknown continuous-time nonlinear systems via value iteration
- Further facts about input to state stabilization
- Robust reinforcement learning control with static and dynamic stability
- Asymptotic controllability implies feedback stabilization
- On Actor-Critic Algorithms
- On Inf-Convolution-Based Robust Practical Stabilization Under Computational Uncertainty
- Reinforcement learning for adaptive optimal control of unknown continuous-time nonlinear systems with input constraints
- Approximate Dynamic Programming
- Quadratic approximate dynamic programming for input‐affine systems
- A constructive converse Lyapunov theorem on asymptotic stability for nonlinear autonomous ordinary differential equations
- A Computationally Efficient Robust Model Predictive Control Framework for Uncertain Nonlinear Systems
- On Stochastic Stabilization via Nonsmooth Control Lyapunov Functions
- Sliding-mode surface-based approximate optimal control for nonlinear multiplayer Stackelberg-Nash games via adaptive dynamic programming
- Practical stability analysis of sliding-mode control with explicit computation of sampling time
- Adaptive critic learning for approximate optimal event-triggered tracking control of nonlinear systems with prescribed performances