Parameter-free accelerated gradient descent for nonconvex minimization
arXiv: 2212.06410 · MaRDI QID: Q6507288
Author name not available
Abstract: We propose a new first-order method for minimizing nonconvex functions with a Lipschitz continuous gradient and Hessian. The method is an accelerated gradient descent with two restart mechanisms, and it finds a solution where the gradient norm is less than $\varepsilon$ in $O(\varepsilon^{-7/4})$ function and gradient evaluations. Unlike existing algorithms with similar complexity bounds, our method is parameter-free: it requires no prior knowledge of problem-dependent parameters such as the Lipschitz constants and the target accuracy $\varepsilon$. The main challenge in achieving this advantage is estimating the Lipschitz constant of the Hessian using only first-order information. To this end, we develop a new Hessian-free analysis based on two technical inequalities: a Jensen-type inequality for gradients and an error bound for the trapezoidal rule. Several numerical results illustrate the practicality and robustness of the proposed method.
Companion code repository: https://github.com/n-marumo/restarted-agd
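The repository above contains the authors' implementation. For intuition only, the following is a minimal Python sketch of the general restarted-AGD pattern the abstract describes: Nesterov-style acceleration, an on-the-fly doubling estimate of a Lipschitz constant (so no constant need be supplied up front), and a function-value restart. This is not the paper's algorithm, which employs two restart mechanisms and estimates the Hessian's Lipschitz constant via the trapezoidal-rule bound; the function names and the test problem are illustrative assumptions.

```python
import numpy as np

def restarted_agd(f, grad, x0, eps=1e-4, L0=1.0, max_iter=100_000):
    """Toy restarted accelerated gradient descent (illustrative sketch).

    L is a running estimate of the gradient's Lipschitz constant,
    doubled whenever the descent-lemma test fails, so the method needs
    no problem-dependent parameters beyond the accuracy eps.
    """
    x, y = x0.copy(), x0.copy()
    L, theta, fx = L0, 1.0, f(x0)
    for _ in range(max_iter):
        g = grad(y)
        if np.linalg.norm(g) < eps:      # eps-stationary point found
            return y
        x_new = y - g / L                # gradient step from momentum point
        f_new = f(x_new)
        # Descent-lemma test: a valid L gives f(y - g/L) <= f(y) - |g|^2/(2L).
        if f_new > f(y) - g @ g / (2 * L):
            L *= 2.0                     # estimate too small: double and retry
            continue
        theta_new = (1 + np.sqrt(1 + 4 * theta ** 2)) / 2
        y = x_new + (theta - 1) / theta_new * (x_new - x)  # Nesterov extrapolation
        if f_new > fx:                   # restart: drop momentum on an increase
            y, theta_new = x_new.copy(), 1.0
        x, fx, theta = x_new, f_new, theta_new
    return x

# Hypothetical usage on the (nonconvex) two-dimensional Rosenbrock function.
f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
    200 * (x[1] - x[0] ** 2),
])
print(restarted_agd(f, grad, np.array([-1.0, 1.0])))
```

The doubling test mirrors the parameter-free spirit of the paper at the level of the gradient's Lipschitz constant; the paper's contribution is the harder task of doing the analogous estimation for the Hessian's Lipschitz constant using first-order information alone.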