Competitive Gradient Descent
Publication: 6319536
arXiv: 1905.12103
MaRDI QID: Q6319536
Author name not available
Publication date: 28 May 2019
Abstract: We introduce a new algorithm for the numerical computation of Nash equilibria of competitive two-player games. Our method is a natural generalization of gradient descent to the two-player setting where the update is given by the Nash equilibrium of a regularized bilinear local approximation of the underlying game. It avoids oscillatory and divergent behaviors seen in alternating gradient descent. Using numerical experiments and rigorous analysis, we provide a detailed comparison to methods based on \emph{optimism} and \emph{consensus} and show that our method avoids making any unnecessary changes to the gradient dynamics while achieving exponential (local) convergence for (locally) convex-concave zero-sum games. Convergence and stability properties of our method are robust to strong interactions between the players, without adapting the stepsize, which is not the case with previous methods. In our numerical experiments on non-convex-concave problems, existing methods are prone to divergence and instability due to their sensitivity to interactions among the players, whereas we never observe divergence of our algorithm. The ability to choose larger stepsizes furthermore allows our algorithm to achieve faster convergence, as measured by the number of model evaluations.
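For zero-sum games, the Nash equilibrium of the regularized bilinear local game has a closed form: with stepsize \(\eta\), player \(x\) minimizing \(f\) and player \(y\) maximizing \(f\), the updates are \(\Delta x = -\eta\,(\mathrm{Id} + \eta^2 D^2_{xy}f\,D^2_{yx}f)^{-1}(\nabla_x f + \eta\,D^2_{xy}f\,\nabla_y f)\) and \(\Delta y = \eta\,(\mathrm{Id} + \eta^2 D^2_{yx}f\,D^2_{xy}f)^{-1}(\nabla_y f - \eta\,D^2_{yx}f\,\nabla_x f)\). The sketch below illustrates this update on a toy bilinear game; the JAX implementation, the objective, the stepsize, and the dense linear solves are illustrative choices, not code from the paper or the companion repository.

```python
# Minimal sketch of the zero-sum competitive gradient descent (CGD) update,
# assuming player x minimizes f(x, y) and player y maximizes f(x, y).
import jax
import jax.numpy as jnp

def cgd_step(f, x, y, eta):
    """One CGD step: Nash equilibrium of the regularized bilinear
    local approximation of f around (x, y)."""
    gx = jax.grad(f, argnums=0)(x, y)                          # grad_x f
    gy = jax.grad(f, argnums=1)(x, y)                          # grad_y f
    Dxy = jax.jacfwd(jax.grad(f, argnums=0), argnums=1)(x, y)  # D^2_{xy} f
    Dyx = jax.jacfwd(jax.grad(f, argnums=1), argnums=0)(x, y)  # D^2_{yx} f
    Ix, Iy = jnp.eye(x.size), jnp.eye(y.size)
    # Dense solves of the small linear systems; large problems would instead
    # use Hessian-vector products with an iterative solver (assumption).
    dx = -eta * jnp.linalg.solve(Ix + eta**2 * Dxy @ Dyx, gx + eta * Dxy @ gy)
    dy = eta * jnp.linalg.solve(Iy + eta**2 * Dyx @ Dxy, gy - eta * Dyx @ gx)
    return x + dx, y + dy

# Toy bilinear game f(x, y) = x^T y with unique Nash equilibrium (0, 0);
# plain simultaneous gradient descent/ascent oscillates and diverges on it.
f = lambda x, y: jnp.dot(x, y)
x, y = jnp.array([1.0, -1.0]), jnp.array([0.5, 2.0])
for _ in range(100):
    x, y = cgd_step(f, x, y, eta=0.5)
print(x, y)  # both iterates contract toward the origin
```

On this bilinear example each CGD step contracts the iterates by a factor of \(1/\sqrt{1+\eta^2}\), whereas plain simultaneous gradient descent/ascent expands them by \(\sqrt{1+\eta^2}\) per step, which illustrates the oscillatory divergence mentioned in the abstract.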
Has companion code repository: https://github.com/wagenaartje/torch-cgd