Boosting Frank-Wolfe by Chasing Gradients

arXiv: 2003.06369
MaRDI QID: Q6336715

Author name not available

Publication date: 13 March 2020

Abstract: The Frank-Wolfe algorithm has become a popular first-order optimization algorithm because it is simple and projection-free, and it has been successfully applied to a variety of real-world problems. Its main drawback, however, lies in its convergence rate, which can be excessively slow due to naive descent directions. We propose to speed up the Frank-Wolfe algorithm by better aligning the descent direction with that of the negative gradient via a subroutine. This subroutine chases the negative gradient direction in a matching pursuit style while still preserving the projection-free property. Although the approach is reasonably natural, it produces very significant results. We derive convergence rates ranging from $\mathcal{O}(1/t)$ to $\mathcal{O}(e^{-\omega t})$ for our method, and we demonstrate its competitive advantage both per iteration and in CPU time over the state of the art in a series of computational experiments.

Has companion code repository: https://github.com/cyrillewcombettes/boostfw
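
The repository above holds the authors' implementation. For orientation, below is a minimal NumPy sketch of the gradient-chasing idea described in the abstract, applied to a quadratic objective over the probability simplex. It is an illustrative reading, not the paper's reference code: the function names, the alignment tolerance delta, the exact line search (valid only for this quadratic test objective), and the test problem are all assumptions, and the sketch omits a refinement of the full algorithm, which also considers the backward direction -d/||d|| as a pursuit candidate.

```python
import numpy as np

def lmo_simplex(c):
    """Linear minimization oracle over the probability simplex:
    argmin_{v in simplex} <c, v> is the vertex at the smallest coordinate of c."""
    v = np.zeros_like(c)
    v[np.argmin(c)] = 1.0
    return v

def align(a, b):
    """Cosine alignment <a, b> / (|a| |b|); -1 when either vector is zero."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    if na < 1e-15 or nb < 1e-15:
        return -1.0
    return float(a @ b) / (na * nb)

def chase_gradient(grad, x, lmo, rounds=20, delta=1e-3):
    """Matching-pursuit-style subroutine: greedily combine vertex directions
    v_k - x with nonnegative weights so that their sum aligns with -grad."""
    target = -grad
    d = np.zeros_like(x)
    lam_sum = 0.0
    for _ in range(rounds):
        r = target - d                   # residual not yet matched
        v = lmo(-r)                      # vertex maximizing <r, v>
        u = v - x
        uu = float(u @ u)
        if uu < 1e-15:
            break
        lam = float(r @ u) / uu
        if lam <= 0.0:                   # nonnegative weights keep the step feasible
            break
        d_new = d + lam * u
        if align(target, d_new) - align(target, d) < delta:
            break                        # alignment no longer improving enough
        d, lam_sum = d_new, lam_sum + lam
    if lam_sum <= 0.0:                   # fall back to the vanilla Frank-Wolfe direction
        return lmo(grad) - x
    return d / lam_sum                   # normalized so x + gamma*d stays feasible for gamma in [0, 1]

def boosted_fw(grad_f, x0, lmo, iters=200):
    """Frank-Wolfe iteration using the chased direction; the step size is the
    exact line search for f(x) = 0.5*||x - b||^2, clipped to [0, 1]."""
    x = x0.copy()
    for _ in range(iters):
        g = grad_f(x)
        d = chase_gradient(g, x, lmo)
        dd = float(d @ d)
        if dd < 1e-15:
            break
        gamma = float(np.clip(-float(g @ d) / dd, 0.0, 1.0))
        if gamma <= 0.0:                 # no descent along d: stop
            break
        x = x + gamma * d
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 50
    b = rng.normal(size=n)               # target point, generally off the simplex
    grad_f = lambda x: x - b             # gradient of 0.5*||x - b||^2
    x0 = np.full(n, 1.0 / n)
    x_star = boosted_fw(grad_f, x0, lmo_simplex)
    print("objective:", 0.5 * np.sum((x_star - b) ** 2))
```

Because the returned direction is a convex combination of vertex directions v_k - x, the update x + gamma*d with gamma in [0, 1] is itself a convex combination of x and vertices of the feasible region, so feasibility is preserved without any projection, which is the property the abstract emphasizes.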
