
Optimal learning with costly adjustment

From MaRDI portal
Publication:1904629

DOI: 10.1007/BF01211786 · zbMath: 0840.90002 · OpenAlex: W2091872464 · MaRDI QID: Q1904629

Mark Feldman, Michael Spagat

Publication date: 8 February 1996

Published in: Economic Theory

Full work available at URL: https://doi.org/10.1007/bf01211786


zbMATH Keywords

infinite-horizon Bayesian learning; stochastic limit belief


Mathematics Subject Classification ID

  • Decision theory (91B06)
  • Dynamic programming (90C39)
  • Markov and semi-Markov decision processes (90C40)
  • Memory and learning in psychology (91E40)


Related Items

  • Optimal learning with costly adjustment
  • Generalized Bandit Problems



Cites Work

  • On dynamic programming: Compactness of the space of policies
  • On the generic nonconvergence of Bayesian actions and beliefs
  • Optimal learning with costly adjustment
  • Controlling a Stochastic Process with Unknown Parameters
  • Optimal Control of an Unknown Linear Process with Learning
  • Denumerable-Armed Bandits
  • Conditions for optimality in dynamic programming and for the limit of n-stage optimal policies to be optimal
  • Bayesian dynamic programming
  • Switching Costs and the Gittins Index
  • Optimal Learning by Experimentation
  • Discounted Dynamic Programming
Retrieved from "https://portal.mardi4nfdi.de/w/index.php?title=Publication:1904629&oldid=14317873"
This page was last edited on 1 February 2024, at 15:04.