On the Impossibility of Global Convergence in Multi-Loss Optimization
Publication: Q6341435
arXiv: 2005.12649
MaRDI QID: Q6341435
Author name not available
Publication date: 26 May 2020
Abstract: Under mild regularity conditions, gradient-based methods converge globally to a critical point in the single-loss setting. This is known to break down for vanilla gradient descent when moving to multi-loss optimization, but can we hope to build some algorithm with global guarantees? We negatively resolve this open problem by proving that desirable convergence properties cannot simultaneously hold for any algorithm. Our result has more to do with the existence of games with no satisfactory outcomes than with algorithms per se. More explicitly, we construct a two-player game with zero-sum interactions whose losses are both coercive and analytic, but whose only simultaneous critical point is a strict maximum. Any 'reasonable' algorithm, defined to avoid strict maxima, will therefore fail to converge. This is fundamentally different from single losses, where coercivity implies existence of a global minimum. Moreover, we prove that a wide range of existing gradient-based methods almost surely have bounded but non-convergent iterates in a constructed zero-sum game for suitably small learning rates. It nonetheless remains an open question whether such behavior can arise in high-dimensional games of interest to ML practitioners, such as GANs or multi-agent RL.
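The failure of vanilla gradient descent in multi-loss settings mentioned in the abstract can be seen in a much simpler, well-known example than the paper's construction: the bilinear zero-sum game f(x, y) = x·y, whose only simultaneous critical point is the origin. A minimal sketch (this is an illustrative textbook example, not the game constructed in the paper):

```python
# Simultaneous gradient descent on the zero-sum game f(x, y) = x * y:
# player 1 minimizes f over x, player 2 minimizes -f over y.
# The only simultaneous critical point is (0, 0), yet the iterates
# rotate and spiral outward: each step multiplies the squared norm
# by (1 + lr**2), so the trajectory never converges.

def simultaneous_gd(x, y, lr=0.1, steps=100):
    """Return the trajectory of simultaneous gradient descent."""
    traj = [(x, y)]
    for _ in range(steps):
        gx = y    # d/dx of f(x, y) = x * y
        gy = -x   # d/dy of -f(x, y) = -x * y
        x, y = x - lr * gx, y - lr * gy
        traj.append((x, y))
    return traj

traj = simultaneous_gd(1.0, 1.0)
norms = [(x * x + y * y) ** 0.5 for x, y in traj]
print(norms[0], norms[-1])  # distance from the critical point grows
```

This divergence is the classic motivation for the question the paper answers: modified gradient methods can fix the bilinear case, but no 'reasonable' algorithm can achieve global convergence on all games.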
Has companion code repository: https://github.com/aletcher/impossibility-global-convergence