Average-discounted equilibria in stochastic games (Q1806896)
scientific article; zbMATH DE number 1358461
5 October 2000
Stochastic two-person non-zero-sum games with finite state and action spaces are examined in which player 1 is interested in the limiting average reward and player 2 in the \(\beta\)-discounted reward. By the nature of these criteria, player 1's reward emphasizes what happens in the far future, player 2's what happens in the near future. It is proved that for all \(\varepsilon>0\) and all \(\beta\in(0,1)\) there exists a stationary Nash \(\varepsilon\)-equilibrium. The same result holds within the class of ultimately stationary strategies, i.e., strategies in which the players play Markov strategies up to some stage \(N\) and stationary strategies thereafter. An example demonstrates that 0-equilibria need not exist, not even in history-dependent strategies. However, in some classes of stochastic games with special transition and payoff structures (unichain, perfect information, switching control, ARAT, repeated games with absorbing states), (\(\varepsilon\)-)equilibria can be achieved.
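The contrast between the two reward criteria can be made concrete: once both players fix stationary strategies, the game reduces to a Markov chain with state rewards, and the limiting-average and \(\beta\)-discounted rewards can be computed directly. The following sketch (not from the paper; the chain, rewards, and function names are hypothetical illustrations) shows how the two criteria diverge on a toy chain where early payoffs are high but the process is absorbed into a zero-reward state.

```python
# Illustration (hypothetical example, not from the reviewed paper): for a
# FIXED pair of stationary strategies, a stochastic game induces a Markov
# chain with transition matrix P and state rewards r. We compare player 2's
# beta-discounted reward with player 1's limiting-average reward.

def discounted_value(P, r, beta, start, iters=10_000):
    """Expected beta-discounted reward from `start`, via value iteration
    v <- r + beta * P v (contraction, so it converges for beta < 1)."""
    n = len(r)
    v = [0.0] * n
    for _ in range(iters):
        v = [r[i] + beta * sum(P[i][j] * v[j] for j in range(n))
             for i in range(n)]
    return v[start]

def limiting_average(P, r, start, horizon=100_000):
    """Approximate limiting-average reward: Cesaro mean of the expected
    stage rewards over a long finite horizon."""
    n = len(r)
    dist = [1.0 if i == start else 0.0 for i in range(n)]
    total = 0.0
    for _ in range(horizon):
        total += sum(dist[i] * r[i] for i in range(n))
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return total / horizon

# Toy chain: state 0 pays 1 but drifts into absorbing state 1, which pays 0.
P = [[0.9, 0.1],
     [0.0, 1.0]]
r = [1.0, 0.0]

# Player 2's criterion rewards the early stages in state 0 (value ~ 1.82
# for beta = 0.5); player 1's criterion sees only the absorbing far future
# (limiting average ~ 0).
print(discounted_value(P, r, beta=0.5, start=0))
print(limiting_average(P, r, start=0))
```

The example makes visible why the players' interests can conflict even over the same payoff stream: a strategy profile attractive under discounting may be worthless in the limiting-average sense, which is the tension behind the nonexistence of exact 0-equilibria noted above.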
stochastic game
average-discounted equilibrium