Faster Asynchronous Nonconvex Block Coordinate Descent with Locally Chosen Stepsizes

arXiv: 2203.11307
MaRDI QID: Q6394300

Matthew Ubl, Matthew Hale

Publication date: 21 March 2022

Abstract: Distributed nonconvex optimization problems underlie many applications in learning and autonomy, and such problems commonly face asynchrony in agents' computations and communications. When delays in these operations are bounded, they are called partially asynchronous. In this paper, we present an uncoordinated stepsize selection rule for partially asynchronous block coordinate descent that only requires local information to implement, and it leads to faster convergence for a class of nonconvex problems than existing stepsize rules, which require global information in some form. The problems we consider satisfy the error bound condition, and the stepsize rule we present only requires each agent to know (i) a certain type of Lipschitz constant of its block of the gradient of the objective and (ii) the communication delays experienced between it and its neighbors. This formulation requires less information to be available to each agent than existing approaches, typically allows for agents to use much larger stepsizes, and alleviates the impact of stragglers while still guaranteeing convergence to a stationary point. Simulation results provide comparisons and validate the faster convergence attained by the stepsize rule we develop.
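The abstract describes partially asynchronous block coordinate descent in which each agent updates its own block using possibly stale copies of other agents' blocks, with a stepsize chosen from purely local information: a block-wise Lipschitz constant and the delays it experiences. The paper's exact stepsize rule is not reproduced in the abstract, so the sketch below is only illustrative: it uses the hypothetical choice `gamma_i = 1/(L_i * (B + 1))` (local Lipschitz constant scaled by the delay bound) on a simple strongly convex quadratic, with delays drawn uniformly up to a bound `B`.

```python
import numpy as np

# Illustrative sketch (not the paper's exact rule): partially asynchronous
# block coordinate descent on f(x) = 0.5 x^T A x - b^T x, where each agent
# reads delayed copies of the other blocks and uses a stepsize built only
# from local information: its block-wise Lipschitz constant L_i and the
# delay bound B. The choice gamma_i = 1/(L_i * (B + 1)) is an assumption
# made here for demonstration purposes.

rng = np.random.default_rng(0)
n_blocks, dim = 4, 8               # 4 agents, 2 coordinates each
M = rng.standard_normal((dim, dim))
A = M.T @ M + np.eye(dim)          # positive definite -> unique minimizer
b = rng.standard_normal(dim)
x_star = np.linalg.solve(A, b)

blocks = np.split(np.arange(dim), n_blocks)
# Block-wise Lipschitz constant of grad_i f w.r.t. the full iterate:
# the spectral norm of agent i's block of rows of A.
L = [np.linalg.norm(A[blk, :], 2) for blk in blocks]

B = 3                              # bound on delays (partial asynchrony)
x_hist = [rng.standard_normal(dim)]
for k in range(2000):
    x = x_hist[-1].copy()
    for i, blk in enumerate(blocks):
        # Agent i sees a stale copy of the iterate (bounded delay),
        # but its own block is always current.
        delay = rng.integers(0, min(B, len(x_hist)))
        stale = x_hist[-1 - delay].copy()
        stale[blk] = x[blk]
        grad_blk = A[blk, :] @ stale - b[blk]
        x[blk] -= grad_blk / (L[i] * (B + 1))   # locally chosen stepsize
    x_hist.append(x)

err = np.linalg.norm(x_hist[-1] - x_star)
print(f"final distance to minimizer: {err:.2e}")
```

Despite every agent working from delayed information, the iterates approach the minimizer; larger delay bounds force smaller local stepsizes in this sketch, mirroring the trade-off the abstract describes.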




Has companion code repository: https://github.com/mattubl/asynch-local-stepsizes








