On the convergence of temporal-difference learning with linear function approximation

From MaRDI portal
Publication:5928992

DOI: 10.1023/A:1007609817671
zbMath: 0969.68088
OpenAlex: W1575327902
MaRDI QID: Q5928992

No author found.

Publication date: 8 October 2001

Published in: Machine Learning

Full work available at URL: https://doi.org/10.1023/a:1007609817671


zbMATH Keywords

temporal-difference learning algorithms


Mathematics Subject Classification ID

Computational learning theory (68Q32)
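
For context on the technique named in the title, the sketch below shows a plain TD(0) update with linear function approximation on a toy random-walk chain. It is only an illustrative Python sketch of the general method; the state space, features, step size, and reward are assumptions made for the example and are not taken from the publication or its results.

import numpy as np

# Illustrative sketch of TD(0) with linear function approximation on a
# toy 5-state random walk. All names and parameters are assumptions for
# the example, not taken from the publication itself.

n_states = 5
n_features = 3
rng = np.random.default_rng(0)

# Fixed feature matrix: row s is the feature vector phi(s).
phi = rng.standard_normal((n_states, n_features))

theta = np.zeros(n_features)   # linear value estimate V(s) ~ phi(s) @ theta
alpha = 0.05                   # step size
gamma = 0.9                    # discount factor

def step(s):
    """Random-walk transition with a reward for reaching the right end."""
    s_next = min(max(s + rng.choice([-1, 1]), 0), n_states - 1)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

s = 2
for _ in range(10_000):
    s_next, r = step(s)
    # TD(0) update: move theta along phi(s) by the temporal-difference error.
    td_error = r + gamma * phi[s_next] @ theta - phi[s] @ theta
    theta += alpha * td_error * phi[s]
    s = s_next

print("learned weights:", theta)
print("approximate state values:", phi @ theta)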


Related Items (7)

Restricted gradient-descent algorithm for value-function approximation in reinforcement learning
A policy gradient method for semi-Markov decision processes with application to call admission control
A generalized Kalman filter for fixed point approximation and efficient temporal-difference learning
Asymptotic analysis of temporal-difference learning algorithms with constant step-sizes
Asymptotic analysis of temporal-difference learning algorithms with constant step-sizes
Natural actor-critic algorithms
FLOW SHOP SCHEDULING WITH REINFORCEMENT LEARNING






This page was built for publication: On the convergence of temporal-difference learning with linear function approximation

Retrieved from "https://portal.mardi4nfdi.de/w/index.php?title=Publication:5928992&oldid=12068182"