On the convergence of temporal-difference learning with linear function approximation
Publication: 5928992
DOI: 10.1023/A:1007609817671
zbMath: 0969.68088
OpenAlex: W1575327902
MaRDI QID: Q5928992
No author found.
Publication date: 8 October 2001
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1023/a:1007609817671
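For context only (this is not part of the record), the method named in the title can be illustrated by the TD(0) semi-gradient update with linear features, V(s) ≈ φ(s)ᵀθ. The sketch below is a minimal, assumed example; the environment, feature map, and all parameters (one_hot, td0_linear, chain_step, alpha, gamma) are illustrative placeholders, not taken from the publication.

import numpy as np

# Minimal sketch of TD(0) with linear value-function approximation (illustrative only).

def one_hot(n_states):
    """Tabular features as a special case of linear features (assumption)."""
    eye = np.eye(n_states)
    return lambda s: eye[s]

def td0_linear(step_fn, phi, dim, alpha=0.05, gamma=0.9, n_steps=20_000, seed=0):
    """Run TD(0): V(s) is approximated by phi(s) @ theta, updated along the TD error."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    s = 0
    for _ in range(n_steps):
        s_next, r = step_fn(s, rng)                        # sample one transition
        td_error = r + gamma * phi(s_next) @ theta - phi(s) @ theta
        theta += alpha * td_error * phi(s)                 # semi-gradient update
        s = s_next
    return theta

# Hypothetical two-state Markov chain with reward 1 for landing in state 1.
def chain_step(s, rng):
    s_next = int(rng.integers(2))      # uniform transitions (assumption)
    return s_next, float(s_next == 1)

if __name__ == "__main__":
    phi = one_hot(2)
    theta = td0_linear(chain_step, phi, dim=2)
    print("estimated values:", theta)  # approximate state values under the uniform chain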
Related Items (7)

Restricted gradient-descent algorithm for value-function approximation in reinforcement learning
A policy gradient method for semi-Markov decision processes with application to call admission control
A generalized Kalman filter for fixed point approximation and efficient temporal-difference learning
Asymptotic analysis of temporal-difference learning algorithms with constant step-sizes
Asymptotic analysis of temporal-difference learning algorithms with constant step-sizes
Natural actor-critic algorithms
FLOW SHOP SCHEDULING WITH REINFORCEMENT LEARNING