Approximations of the Restless Bandit Problem
From MaRDI portal
Publication: 4633023
zbMath: 1490.62038
arXiv: 1702.06972
MaRDI QID: Q4633023
Steffen Grünewälder, Azadeh Khaleghi
Publication date: 2 May 2019
Full work available at URL: https://arxiv.org/abs/1702.06972
Mathematics Subject Classification (MSC):
- Stationary stochastic processes (60G10)
- Applications of Markov chains and discrete-time Markov processes on general state spaces (social mobility, learning theory, industrial processes, etc.) (60J20)
- Compound decision problems in statistical decision theory (62C25)
Cites Work
- Regret bounds for restless Markov bandits
- UCB revisited: improved regret bounds for the stochastic multi-armed bandit problem
- Basic properties of strong mixing conditions. A survey and some open questions
- Mixing: Properties and examples
- Asymptotic theory of weakly dependent stochastic processes
- The Complexity of Optimal Queuing Network Control
- Approximation algorithms for restless bandit problems
- Mathematical Foundations of Infinite-Dimensional Statistical Models
- Real Analysis and Probability
- Uniform Central Limit Theorems
- Comparison of perturbation bounds for the stationary distribution of a Markov chain
- Finite-time analysis of the multiarmed bandit problem