Optimal model-free output synchronization of heterogeneous systems using off-policy reinforcement learning

From MaRDI portal
Publication: 313256

DOI: 10.1016/j.automatica.2016.05.017
zbMath: 1343.93006
OpenAlex: W2430619152
MaRDI QID: Q313256

Subramanya P. Nageshrao, Gabriel A. Delgado Lopes, Robert Babuška, Hamidreza Modares, Frank L. Lewis

Publication date: 9 September 2016

Published in: Automatica

Full work available at URL: https://doi.org/10.1016/j.automatica.2016.05.017




Related Items (25)

Heterogeneous formation control of multiple rotorcrafts with unknown dynamics by reinforcement learning
Adaptive fuzzy sliding-mode consensus control of nonlinear under-actuated agents in a near-optimal reinforcement learning framework
General value iteration based single network approach for constrained optimal controller design of partially-unknown continuous-time nonlinear systems
Output synchronization of heterogeneous discrete-time systems: a model-free optimal approach
Finite-time adaptive output synchronization of uncertain nonlinear heterogeneous multi-agent systems
Adaptive distributed observer for an uncertain leader over acyclic switching digraphs
Optimal robust formation control for heterogeneous multi-agent systems based on reinforcement learning
Reinforcement learning and cooperative \(H_\infty\) output regulation of linear continuous-time multi-agent systems
Distributed output data-driven optimal robust synchronization of heterogeneous multi-agent systems
Specified convergence rate guaranteed output tracking of discrete-time systems via reinforcement learning
Leader-follower time-varying output formation control of heterogeneous systems under cyber attack with active leader
Optimal output synchronization of heterogeneous multi-agent systems using measured input-output data
ADP-based robust consensus for multi-agent systems with unknown dynamics and random uncertain channels
Off-policy learning for adaptive optimal output synchronization of heterogeneous multi-agent systems
\(\mathcal{H}_2\) suboptimal output synchronization of heterogeneous multi-agent systems
Cooperative adaptive optimal output regulation of nonlinear discrete-time multi-agent systems
Observer-based adaptive optimal output containment control problem of linear heterogeneous multiagent systems with relative output measurements
Output-feedback \(H_\infty\) quadratic tracking control of linear systems using reinforcement learning
Off-policy Q-learning: solving Nash equilibrium of multi-player games with network-induced delay and unmeasured state
Output synchronization control for a class of complex dynamical networks with non-identical dynamics
Reinforcement learning for distributed control and multi-player games
Distributed consensus control for nonlinear multi-agent systems
Adaptive optimal output tracking of continuous-time systems via output-feedback-based reinforcement learning
Adaptive distributed observer for an uncertain leader with an unknown output over directed acyclic graphs
Cooperative output regulation of linear multi-agent systems subject to an uncertain leader system






