On Graduated Optimization for Stochastic Non-Convex Problems

From MaRDI portal
Publication: 6259901

arXiv: 1503.03712 · MaRDI QID: Q6259901

Author name not available

Publication date: 12 March 2015

Abstract: The graduated optimization approach, also known as the continuation method, is a popular heuristic for solving non-convex problems that has received renewed interest over the last decade. Despite its popularity, very little is known in the way of theoretical convergence analysis. In this paper we describe a new first-order algorithm based on graduated optimization and analyze its performance. We characterize a parameterized family of non-convex functions for which this algorithm provably converges to a global optimum. In particular, we prove that the algorithm converges to an ε-approximate solution within O(1/ε^2) gradient-based steps. We extend our algorithm and analysis to the setting of stochastic non-convex optimization with noisy gradient feedback, attaining the same convergence rate. Additionally, we discuss the setting of zero-order optimization and devise a variant of our algorithm which converges at a rate of O(d^2/ε^4).
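The following is a minimal, illustrative Python sketch of the general graduated-optimization idea described in the abstract: optimize a heavily smoothed surrogate of the objective, then progressively reduce the smoothing while warm-starting each stage from the previous solution. The toy objective, the smoothing schedule, the step sizes, and the two-point zero-order gradient estimator are all assumptions made for illustration; this is not the paper's exact algorithm or its analyzed parameter choices.

```python
import numpy as np


def objective(x):
    # Toy non-convex objective (an assumption for this demo): a quadratic bowl
    # overlaid with high-frequency oscillations that create many local minima.
    return 0.5 * np.dot(x, x) + 2.0 * np.sum(np.cos(5.0 * x))


def smoothed_grad_estimate(f, x, delta, num_samples=20, rng=None):
    # Two-point, zero-order estimate of the gradient of the delta-smoothed
    # objective f_delta(x) = E_u[f(x + delta * u)] over random unit directions u.
    # Only function evaluations are used, loosely mirroring the zero-order
    # setting mentioned in the abstract.
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    g = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        g += (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta) * u
    return d * g / num_samples


def graduated_optimization(f, x0, deltas=(4.0, 2.0, 1.0, 0.5, 0.1),
                           inner_steps=300, step_size=0.05, seed=0):
    # Outer loop: anneal the smoothing parameter delta from coarse to fine.
    # Inner loop: stochastic gradient descent on the smoothed objective,
    # warm-started from the previous stage's solution.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for delta in deltas:
        for _ in range(inner_steps):
            g = smoothed_grad_estimate(f, x, delta, rng=rng)
            x -= step_size * g
    return x


if __name__ == "__main__":
    x_hat = graduated_optimization(objective, x0=np.full(5, 3.0))
    print("solution:", np.round(x_hat, 3), "value:", round(objective(x_hat), 3))
```

In this sketch the early, strongly smoothed stages wash out the oscillations so gradient steps track the underlying bowl, and the later stages refine the iterate on the less-smoothed objective; the specific schedule and sample counts are placeholder choices, not values taken from the paper.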




Has companion code repository: https://github.com/ecotner/ConvexityAnnealing








This page was built for publication: On Graduated Optimization for Stochastic Non-Convex Problems
