Anderson Acceleration of Proximal Gradient Methods

From MaRDI portal
Publication:6327475

arXiv: 1910.08590 · MaRDI QID: Q6327475

Author name not available

Publication date: 18 October 2019

Abstract: Anderson acceleration is a well-established and simple technique for speeding up fixed-point computations with countless applications. Previous studies of Anderson acceleration in optimization have only been able to provide convergence guarantees for unconstrained and smooth problems. This work introduces novel methods for adapting Anderson acceleration to (non-smooth and constrained) proximal gradient algorithms. Under some technical conditions, we extend the existing local convergence results of Anderson acceleration for smooth fixed-point mappings to the proposed scheme. We also prove analytically that it is not, in general, possible to guarantee global convergence of native Anderson acceleration. We therefore propose a simple scheme for stabilization that combines the global worst-case guarantees of proximal gradient methods with the local adaptation and practical speed-up of Anderson acceleration.
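To make the abstract concrete, the sketch below applies Anderson acceleration to the proximal gradient (ISTA) fixed-point map for a lasso problem, min 0.5‖Ax−b‖² + λ‖x‖₁. This is an illustrative reconstruction, not the authors' implementation: the memory size, the least-squares formulation of the mixing coefficients, and the simple objective-decrease safeguard (a stand-in for the paper's stabilization scheme) are all assumptions chosen for brevity.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_grad_map(x, A, b, lam, t):
    # One proximal gradient step: the fixed-point map g whose
    # fixed points are the lasso minimizers.
    return soft_threshold(x - t * A.T @ (A @ x - b), lam * t)

def anderson_prox_grad(A, b, lam, m=5, iters=200, tol=1e-10):
    """Anderson-accelerated proximal gradient for the lasso (sketch).

    Accepts the extrapolated point only if it does not increase the
    objective; otherwise it falls back to the plain proximal gradient
    step (a simplified safeguard, not the paper's exact scheme).
    """
    t = 1.0 / np.linalg.norm(A, 2) ** 2          # step size 1/L
    obj = lambda z: 0.5 * np.sum((A @ z - b) ** 2) + lam * np.sum(np.abs(z))
    x = np.zeros(A.shape[1])
    G_hist, F_hist = [], []                      # past g(x_k) and residuals
    for _ in range(iters):
        gx = prox_grad_map(x, A, b, lam, t)
        f = gx - x                               # fixed-point residual
        if np.linalg.norm(f) < tol:
            break
        G_hist.append(gx); F_hist.append(f)
        G_hist = G_hist[-(m + 1):]; F_hist = F_hist[-(m + 1):]
        if len(F_hist) > 1:
            # Type-II Anderson step: minimize ||F @ alpha|| over
            # coefficients summing to one, via differences of residuals.
            F = np.stack(F_hist, axis=1)
            dF = F[:, 1:] - F[:, :-1]
            gamma, *_ = np.linalg.lstsq(dF, F[:, -1], rcond=None)
            G = np.stack(G_hist, axis=1)
            dG = G[:, 1:] - G[:, :-1]
            candidate = G[:, -1] - dG @ gamma
            # Safeguard: keep the accelerated point only if it helps.
            x = candidate if obj(candidate) <= obj(gx) else gx
        else:
            x = gx
    return x
```

On a small random instance the safeguarded iteration inherits the baseline's convergence while the Anderson step typically cuts the iteration count; the unguarded variant is exactly the scheme whose global convergence the abstract shows can fail.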




Has companion code repository: https://github.com/vienmai/AA-Prox

This page was built for publication: Anderson Acceleration of Proximal Gradient Methods
