Efficient Algorithms for Smooth Minimax Optimization

From MaRDI portal
arXiv: 1907.01543
MaRDI QID: Q6321430

Author name not available

Publication date: 2 July 2019

Abstract: This paper studies first-order methods for solving smooth minimax optimization problems $\min_x \max_y g(x,y)$, where $g(\cdot,\cdot)$ is smooth and $g(x,\cdot)$ is concave for each $x$. In terms of $g(\cdot,y)$, we consider two settings -- strongly convex and nonconvex -- and improve upon the best known rates in both. For strongly convex $g(\cdot,y)$, $\forall y$, we propose a new algorithm combining Mirror-Prox and Nesterov's AGD, and show that it can find a global optimum in $\tilde{O}(1/k^2)$ iterations, improving over the current state-of-the-art rate of $O(1/k)$. We use this result, along with an inexact proximal point method, to obtain a $\tilde{O}(1/k^{1/3})$ rate for finding stationary points in the nonconvex setting, where $g(\cdot,y)$ can be nonconvex. This improves over the current best-known rate of $O(1/k^{1/5})$. Finally, we instantiate our result for finite nonconvex minimax problems, i.e., $\min_x \max_{1 \leq i \leq m} f_i(x)$ with nonconvex $f_i(\cdot)$, obtaining a convergence rate of $O(m (\log m)^{3/2} / k^{1/3})$ total gradient evaluations for finding a stationary point.
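For intuition, the sketch below shows the basic Euclidean Mirror-Prox (extragradient) step that the abstract names as one building block of the proposed method, applied to a toy strongly-convex-concave problem. The toy objective $g(x,y) = x^\top A y + \tfrac{\mu}{2}\|x\|^2$, the step size `eta`, and the iteration count are illustrative assumptions; this is not the paper's full algorithm, which additionally combines such a step with Nesterov's AGD (see the companion repository linked below).

```python
# Minimal Euclidean Mirror-Prox (extragradient) sketch for a toy
# convex-concave problem g(x, y) = x^T A y + (mu/2)||x||^2, whose unique
# saddle point is (0, 0). Problem, step size, and iteration count are
# illustrative choices, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
d, mu = 5, 1.0
A = rng.standard_normal((d, d))

def grad_x(x, y):   # nabla_x g(x, y) = A y + mu * x
    return A @ y + mu * x

def grad_y(x, y):   # nabla_y g(x, y) = A^T x
    return A.T @ x

x, y = np.ones(d), np.ones(d)
eta = 0.9 / (np.linalg.norm(A, 2) + mu)   # step size below 1/L

for _ in range(500):
    # Extrapolation (leading) step using gradients at the current point...
    x_half = x - eta * grad_x(x, y)
    y_half = y + eta * grad_y(x, y)
    # ...then the actual update using gradients at the midpoint.
    x = x - eta * grad_x(x_half, y_half)
    y = y + eta * grad_y(x_half, y_half)

print(np.linalg.norm(x), np.linalg.norm(y))  # both shrink toward the saddle point at 0
```

The midpoint gradient evaluation is what lets Mirror-Prox handle the rotational dynamics induced by the bilinear coupling term, on which plain simultaneous gradient descent-ascent can diverge.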




Has companion code repository: https://github.com/tkkiran/DIAG







