Hamiltonian Descent Methods

From MaRDI portal
Publication: 6306690

arXiv: 1809.05042
MaRDI QID: Q6306690

Author name not available

Publication date: 13 September 2018

Abstract: We propose a family of optimization methods that achieve linear convergence using first-order gradient information and constant step sizes on a class of convex functions much larger than the smooth and strongly convex ones. This larger class includes functions whose second derivatives may be singular or unbounded at their minima. Our methods are discretizations of conformal Hamiltonian dynamics, which generalize the classical momentum method to model the motion of a particle with non-standard kinetic energy exposed to a dissipative force and the gradient field of the function of interest. They are first-order in the sense that they require only gradient computation. Yet, crucially, the kinetic gradient map can be designed to incorporate information about the convex conjugate in a fashion that allows for linear convergence on convex functions that may be non-smooth or non-strongly convex. We study in detail one implicit and two explicit methods. For one explicit method, we provide conditions under which it converges to stationary points of non-convex functions. For all three methods, we provide conditions on the convex function and kinetic energy pair that guarantee linear convergence, and show that these conditions can be satisfied by functions with power growth. In sum, these methods expand the class of convex functions on which linear convergence is possible with first-order computation.
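A minimal Python sketch may help fix ideas about the kind of explicit discretization the abstract describes. The loop below follows the conformal Hamiltonian ODE dx/dt = ∇k(p), dp/dt = −∇f(x) − γp with a semi-implicit treatment of the dissipation term; the step size eps, dissipation gamma, iteration count, and the power-type kinetic energy pairing are illustrative assumptions, not values or code taken from the paper or its companion repository.

import numpy as np

def hamiltonian_descent(grad_f, grad_k, x0, eps=0.1, gamma=1.0, iters=500):
    """Explicit discretization of conformal Hamiltonian dynamics
    (illustrative sketch; eps, gamma, iters are untuned assumptions)."""
    x = np.asarray(x0, dtype=float)
    p = np.zeros_like(x)
    delta = 1.0 / (1.0 + eps * gamma)      # semi-implicit dissipation factor
    for _ in range(iters):
        p = delta * (p - eps * grad_f(x))  # momentum update with dissipation
        x = x + eps * grad_k(p)            # position update via kinetic gradient map
    return x

# Example: f(x) = ||x||^4 / 4 has a singular Hessian at its minimum, so it is
# convex but not strongly convex. Pairing it with the dual-power kinetic
# energy k(p) = (3/4) ||p||^(4/3), whose gradient is p / ||p||^(2/3), mirrors
# the abstract's idea of building convex-conjugate information into the
# kinetic map (the conjugate of ||x||^4 / 4 is (3/4) ||p||^(4/3)).
grad_f = lambda x: np.linalg.norm(x) ** 2 * x

def grad_k(p):
    n = np.linalg.norm(p)
    return p / n ** (2.0 / 3.0) if n > 0 else p

print(hamiltonian_descent(grad_f, grad_k, x0=[2.0, -1.0]))  # approaches the origin

With the classical kinetic energy k(p) = ||p||^2 / 2, for which ∇k(p) = p, the same loop reduces to a standard momentum method; the non-standard kinetic gradient map is what lets the iteration keep making progress where the Hessian of f degenerates.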

Has companion code repository: https://github.com/mocchi-tam/Hamiltonian_Descent_Methods