A fast and simple modification of Newton's method helping to avoid saddle points

From MaRDI portal
Publication:6341898

arXiv: 2006.01512 · MaRDI QID: Q6341898

Author name not available

Publication date: 2 June 2020

Abstract: We propose in this paper New Q-Newton's method. The update rule is very simple conceptually, for example $x_{n+1} = x_n - w_n$ where $w_n = pr_{A_n,+}(v_n) - pr_{A_n,-}(v_n)$, with $A_n = \nabla^2 f(x_n) + \delta_n \|\nabla f(x_n)\|^2 \cdot Id$ and $v_n = A_n^{-1} \cdot \nabla f(x_n)$. Here $\delta_n$ is an appropriate real number chosen so that $A_n$ is invertible, and $pr_{A_n,\pm}$ are the projections onto the vector subspaces generated by eigenvectors of positive (correspondingly negative) eigenvalues of $A_n$. The main result of this paper roughly says that if $f$ is $C^3$ (possibly unbounded from below) and a sequence $\{x_n\}$, constructed by New Q-Newton's method from a random initial point $x_0$, converges, then the limit point is a critical point and is not a saddle point, and the convergence rate is the same as that of Newton's method. The first author has recently succeeded in incorporating Backtracking line search into New Q-Newton's method, thus resolving the convergence-guarantee issue observed for some (non-smooth) cost functions. An application to quickly finding zeros of a univariate meromorphic function is discussed. Various experiments against well-known algorithms such as BFGS and Adaptive Cubic Regularization are presented.
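The update rule from the abstract can be sketched in NumPy as follows. This is a minimal illustration, not the authors' implementation: the function name, the candidate values for $\delta_n$, and the invertibility tolerance are assumptions made here for the sketch.

```python
import numpy as np

def new_q_newton_step(grad, hess, x, deltas=(0.0, 1.0, 2.0)):
    """One step of New Q-Newton's method (illustrative sketch).

    grad, hess: callables returning the gradient vector and Hessian matrix at x.
    deltas: candidate values for delta_n; the first one making A_n invertible is used.
    """
    g = grad(x)
    H = hess(x)
    # A_n = Hessian + delta_n * ||grad||^2 * Id, with delta_n chosen so A_n is invertible
    for d in deltas:
        A = H + d * np.dot(g, g) * np.eye(len(x))
        if np.abs(np.linalg.eigvalsh(A)).min() > 1e-12:
            break
    # v_n = A_n^{-1} grad f(x_n)
    v = np.linalg.solve(A, g)
    # Decompose v_n along the eigenvectors of A_n and flip the components
    # lying in the negative eigenspace: w_n = pr_+(v_n) - pr_-(v_n)
    eigvals, eigvecs = np.linalg.eigh(A)
    coeffs = eigvecs.T @ v
    w = eigvecs @ (np.sign(eigvals) * coeffs)
    return x - w
```

For example, on $f(x, y) = x^2 - y^2$ starting at $(1, 1)$, the plain Newton step would jump straight to the saddle at the origin, while this step moves toward the minimum in the $x$-direction and away from the saddle in the $y$-direction.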




Has companion code repository: https://github.com/hphuongdhsp/Q-Newton-method








