Stationary almost Markov perfect equilibria in discounted stochastic games (Q2806812)
scientific article; zbMATH DE number 6582378
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Stationary almost Markov perfect equilibria in discounted stochastic games | scientific article; zbMATH DE number 6582378 | |
Statements
19 May 2016
discounted stochastic game
stationary Nash equilibrium
dynamic Cournot game
Stationary almost Markov perfect equilibria in discounted stochastic games (English)
Recent counterexamples by \textit{Y. Levy} [Econometrica 81, No. 5, 1973--2007 (2013; Zbl 1354.91016)] show that a nonzero-sum discounted Markov game need not have a stationary Markov perfect equilibrium.

In the paper under review, the authors consider the slightly weaker concept of a stationary almost Markov perfect equilibrium (SAMPE), i.e. a Nash equilibrium that depends only on the current and the previous states of the game. They prove the existence of a SAMPE for discounted stochastic games having
\begin{itemize}
\item a Borel state space \(S\),
\item for each player \(i\), a family of state-dependent action sets \(A_i(s)\), \(s\in S\), which are subsets of some compact metric space,
\item reward functions that are Carathéodory functions,
\item a transition probability \(g\) that is a convex combination of a finite number of state-dependent probability measures, all dominated by the same probability measure (a schematic form is sketched after this list).
\end{itemize}

After proving this result, a large section of the paper is devoted to examples of dynamic Cournot games which admit a SAMPE.
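As a reading aid, here is a minimal LaTeX sketch of the two structural ingredients above; the symbols \(\sigma_i\), \(d_j\), \(\mu_j\) and \(\lambda\) are not taken from the review and only illustrate one plausible reading of the assumptions. A stationary almost Markov strategy of player \(i\) may condition on the previous state \(s'\) as well as the current state \(s\):
\[
  \sigma_i(\cdot \mid s', s) \in \Pr\bigl(A_i(s)\bigr), \qquad (s', s) \in S \times S,
\]
and the transition probability is a convex combination of finitely many state-dependent measures dominated by a common measure \(\lambda\):
\[
  g(\cdot \mid s, a) = \sum_{j=1}^{m} d_j(s, a)\, \mu_j(\cdot \mid s),
  \qquad d_j(s, a) \ge 0, \quad \sum_{j=1}^{m} d_j(s, a) = 1, \quad \mu_j(\cdot \mid s) \ll \lambda .
\]
(Here the weights \(d_j\) are allowed to depend on the state and the action profile \(a\); the review does not spell out their exact dependence.)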