Learning Graphon Mean Field Games and Approximate Nash Equilibria
arXiv: 2112.01280 · MaRDI QID: Q6384654
Author name not available
Publication date: 29 November 2021
Abstract: Recent advances at the intersection of dense large graph limits and mean field games have begun to enable the scalable analysis of a broad class of dynamical sequential games with large numbers of agents. So far, results have been largely limited to graphon mean field systems with continuous-time diffusive or jump dynamics, typically without control and with little focus on computational methods. We propose a novel discrete-time formulation of graphon mean field games as the limit of nonlinear dense graph Markov games with weak interaction. On the theoretical side, we give extensive and rigorous existence and approximation properties of the graphon mean field solution in sufficiently large systems. On the practical side, we provide general learning schemes for graphon mean field equilibria, either by introducing agent equivalence classes or by reformulating the graphon mean field system as a classical mean field system. By repeatedly computing a regularized optimal control solution and the mean field it generates, we obtain plausible approximate Nash equilibria in large dense graph games with many agents that would otherwise be infeasible to solve. Empirically, we demonstrate on a number of examples that, for our computed equilibria, the finite-agent behavior comes increasingly close to the mean field behavior as the graph or system size grows, verifying our theory. More generally, we successfully apply policy gradient reinforcement learning in conjunction with sequential Monte Carlo methods.
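The abstract describes a fixed-point scheme: discretize the graphon into agent equivalence classes, then alternate between a regularized best response and the mean field it generates. The following is a minimal Python sketch of that loop, not the authors' implementation; the block graphon W, the two-state toy dynamics, the reward, and all parameter values are illustrative assumptions, and the actual method lives in the companion repository linked below.

```python
import numpy as np

K, S, A, T = 10, 2, 2, 20        # equivalence classes, states, actions, horizon
eta = 0.5                        # entropy-regularization temperature (assumed)

# Block discretization of an assumed graphon W(x, y) = exp(-|x - y|).
xs = (np.arange(K) + 0.5) / K
W = np.exp(-np.abs(xs[:, None] - xs[None, :]))

def neighborhood_field(mu):
    """Graphon-weighted aggregate felt by each class: (1/K) * sum_l W[k,l] mu[l]."""
    return W @ mu / K            # shape (K, S)

def transition(z):
    """Toy dynamics (assumed): action 1 flips the state; more neighbor mass in
    state 1 raises the flip probability. Returns P[s, a, s'] for one class."""
    p = 0.3 + 0.4 * z[1]
    P = np.zeros((S, A, S))
    for s in range(S):
        P[s, 0, s] = 1.0         # action 0: stay put
        P[s, 1, 1 - s] = p       # action 1: flip with probability p
        P[s, 1, s] = 1 - p
    return P

def reward(s, a, z):
    """Toy reward (assumed): prefer the minority state, pay a small action cost."""
    return -z[s] - 0.1 * a

def best_response(mu_path):
    """Entropy-regularized backward induction against a fixed mean field path."""
    pi = np.zeros((T, K, S, A))
    V = np.zeros((K, S))
    for t in reversed(range(T)):
        z = neighborhood_field(mu_path[t])
        for k in range(K):
            P = transition(z[k])
            Q = np.array([[reward(s, a, z[k]) + P[s, a] @ V[k]
                           for a in range(A)] for s in range(S)])
            pi[t, k] = np.exp(Q / eta) / np.exp(Q / eta).sum(1, keepdims=True)
            V[k] = eta * np.log(np.exp(Q / eta).sum(1))   # soft (regularized) value
    return pi

def mean_field(pi):
    """Forward propagation of the class-wise state marginals under policy pi."""
    mu = np.full((K, S), 1 / S)
    path = [mu]
    for t in range(T - 1):
        z = neighborhood_field(mu)
        nxt = np.zeros_like(mu)
        for k in range(K):
            P = transition(z[k])
            for s in range(S):
                nxt[k] += mu[k, s] * (pi[t, k, s] @ P[s])
        mu = nxt
        path.append(mu)
    return np.array(path)

# Damped fixed-point iteration: mean field -> regularized best response -> mean field.
mu_path = np.full((T, K, S), 1 / S)
for it in range(50):
    pi = best_response(mu_path)
    new = mean_field(pi)
    gap = np.abs(new - mu_path).max()
    mu_path = 0.5 * mu_path + 0.5 * new   # damping stabilizes the iteration
    if gap < 1e-4:
        break
print(f"stopped after {it + 1} iterations, residual {gap:.2e}")
```

The entropy temperature eta and the damping step stand in for the regularization the abstract mentions; without some such regularization, the raw best-response map need not converge.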
Companion code repository: https://github.com/tudkcui/gmfg-learning