A fast and simple modification of Newton's method helping to avoid saddle points
arXiv: 2006.01512
MaRDI QID: Q6341898
Author name not available
Publication date: 2 June 2020
Abstract: We propose in this paper New Q-Newton's method. The update rule is conceptually very simple: $x_{n+1} = x_n - w_n$, where $w_n = pr_{A_n,+}(v_n) - pr_{A_n,-}(v_n)$, with $A_n = \nabla^2 f(x_n) + \delta_n \|\nabla f(x_n)\|^{1+\alpha}\,\mathrm{Id}$ and $v_n = A_n^{-1}\nabla f(x_n)$. Here $\delta_n$ is an appropriate real number chosen so that $A_n$ is invertible, and $pr_{A_n,\pm}$ are the projections onto the vector subspaces generated by eigenvectors of positive (correspondingly negative) eigenvalues of $A_n$. The main result of this paper roughly says that if $f$ is $C^3$ (it can be unbounded from below) and a sequence $\{x_n\}$, constructed by New Q-Newton's method from a random initial point $x_0$, converges, then the limit point is a critical point and is not a saddle point, and the convergence rate is the same as that of Newton's method. The first author has recently successfully incorporated Backtracking line search into New Q-Newton's method, thus resolving the convergence guarantee issue observed for some (non-smooth) cost functions. An application to quickly finding zeros of a univariate meromorphic function is discussed. Various experiments against well-known algorithms such as BFGS and Adaptive Cubic Regularization are presented.
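The update rule above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function name `new_q_newton_step`, the fixed list of trial values in `deltas`, the invertibility tolerance, and the toy cost function are all assumptions made for the example. It forms $A_n$, solves $v_n = A_n^{-1}\nabla f(x_n)$, and flips the component of $v_n$ in the negative eigenspace of $A_n$ to obtain $w_n$:

```python
import numpy as np

def new_q_newton_step(grad, hess, x, deltas=(0.0, 1.0, 2.0), alpha=1.0):
    """One step of New Q-Newton's method (illustrative sketch).

    deltas: trial values; the first delta making
    A_n = hess + delta * ||grad||^(1+alpha) * Id invertible is used.
    """
    g = grad(x)
    H = hess(x)
    A = H
    for delta in deltas:
        A = H + delta * np.linalg.norm(g) ** (1.0 + alpha) * np.eye(len(x))
        if abs(np.linalg.det(A)) > 1e-12:   # A_n invertible
            break
    v = np.linalg.solve(A, g)               # v_n = A_n^{-1} grad f(x_n)
    eigvals, eigvecs = np.linalg.eigh(A)    # A_n is symmetric
    coeffs = eigvecs.T @ v                  # v_n in the eigenbasis of A_n
    # w_n = pr_{A_n,+}(v_n) - pr_{A_n,-}(v_n): negate the negative-eigenspace part
    w = eigvecs @ (np.sign(eigvals) * coeffs)
    return x - w

# Toy cost (not from the paper) with a saddle at the origin and
# minima at (+-1, 0):  f(x, y) = (x^2 - 1)^2 / 4 + y^2 / 2
grad = lambda p: np.array([p[0] ** 3 - p[0], p[1]])
hess = lambda p: np.array([[3.0 * p[0] ** 2 - 1.0, 0.0], [0.0, 1.0]])

x = np.array([0.1, 0.3])                    # start near the saddle
for _ in range(50):
    x = new_q_newton_step(grad, hess, x)
```

Started next to the saddle at the origin, the sketch escapes along the negative-curvature direction and then converges at Newton speed to the minimum at $(1, 0)$.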
Has companion code repository: https://github.com/hphuongdhsp/Q-Newton-method