Newton-type Methods for Minimax Optimization


arXiv: 2006.14592 · MaRDI QID: Q6343762

Author name not available

Publication date: 25 June 2020

Abstract: Differential games, in particular two-player sequential zero-sum games (a.k.a. minimax optimization), have been an important modeling tool in applied science and have received renewed interest in machine learning due to many recent applications, such as adversarial training, generative models, and reinforcement learning. However, existing theory mostly focuses on convex-concave functions, with few exceptions. In this work, we propose two novel Newton-type algorithms for nonconvex-nonconcave minimax optimization. We prove their local convergence at strict local minimax points, which are surrogates of global solutions. We argue that our Newton-type algorithms nicely complement existing ones in that (a) they converge faster to strict local minimax points; (b) they are much more effective when the problem is ill-conditioned; (c) their computational complexity remains similar. We verify the effectiveness of our Newton-type algorithms through experiments on training GANs, which are intrinsically nonconvex and ill-conditioned. Our code is available at https://github.com/watml/min-max-2nd-order.
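To illustrate the general idea of a Newton-type update for minimax problems (this is a minimal generic sketch, not the algorithms proposed in the paper), one can apply a Newton step to the gradient field v(x, y) = (∂f/∂x, −∂f/∂y) of a simple quadratic minimax objective; the zeros of v are the candidate minimax points. The objective below and the coupling constant `a` are illustrative assumptions chosen so that (0, 0) is the unique strict local minimax point.

```python
# Minimal sketch (NOT the paper's method): a generic Newton step on the
# gradient field of the quadratic minimax problem
#     min_x max_y  f(x, y) = x^2/2 + a*x*y - y^2/2,
# whose unique strict local minimax point is (0, 0).
import numpy as np

a = 3.0  # illustrative coupling; large |a| makes plain gradient descent-ascent spiral

def grad_field(z):
    """v(z) = (df/dx, -df/dy); zeros of v are candidate minimax points."""
    x, y = z
    return np.array([x + a * y, -(a * x - y)])

def jacobian(z):
    """Jacobian of v; constant here because f is quadratic."""
    return np.array([[1.0, a], [-a, 1.0]])

z = np.array([2.0, -1.5])  # arbitrary starting point
for _ in range(5):
    # Newton step: z <- z - J(z)^{-1} v(z)
    z = z - np.linalg.solve(jacobian(z), grad_field(z))

print(z)  # -> [0. 0.] (exact after one step, since f is quadratic)
```

For this quadratic objective a single Newton step lands exactly on the minimax point, which reflects the fast local convergence that second-order methods offer over first-order gradient descent-ascent; the paper's contribution concerns the much harder nonconvex-nonconcave setting.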




Has companion code repository: https://github.com/watml/min-max-2nd-order







