Stochastic Optimization under Distributional Drift


arXiv: 2108.07356
MaRDI QID: Q6375388

Author name not available

Publication date: 16 August 2021

Abstract: We consider the problem of minimizing a convex function that is evolving according to unknown and possibly stochastic dynamics, which may depend jointly on time and on the decision variable itself. Such problems abound in the machine learning and signal processing literature, under the names of concept drift, stochastic tracking, and performative prediction. We provide novel non-asymptotic convergence guarantees for stochastic algorithms with iterate averaging, focusing on bounds valid both in expectation and with high probability. The efficiency estimates we obtain clearly decouple the contributions of optimization error, gradient noise, and time drift. Notably, we identify a low drift-to-noise regime in which the tracking efficiency of the proximal stochastic gradient method benefits significantly from a step decay schedule. Numerical experiments illustrate our results.




Has companion code repository: https://github.com/joshuacutler/TimeDriftExperiments
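
The linked repository contains the authors' actual experiment code. As an independent illustration only, the following is a minimal sketch of the kind of method the abstract describes: a proximal stochastic gradient iteration tracking a drifting minimizer under a step decay schedule. The drifting quadratic objective, the l1 proximal term, the noise and drift levels, the decay epochs, and the `soft_threshold` helper are all hypothetical choices made for this sketch, not taken from the paper; the paper's iterate averaging is also omitted here for brevity.

```python
# A minimal, illustrative sketch (not the authors' code) of tracking a
# drifting minimizer with a proximal stochastic gradient method.
# Hypothetical setup: the smooth part is f_t(x) = 0.5 * ||x - b_t||^2,
# where the target b_t performs a small random walk (the "drift"),
# gradients carry additive Gaussian noise, and the proximal term is an
# l1 penalty handled in closed form by soft-thresholding.
import numpy as np

rng = np.random.default_rng(0)
d, T = 10, 2000
sigma_noise = 1.0    # gradient noise level (assumed)
delta_drift = 0.01   # per-step drift of the minimizer (low drift-to-noise)
lam = 0.05           # l1 regularization weight (assumed)

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(d)
b = rng.normal(size=d)  # current (unobserved) minimizer of the smooth part
eta = 0.5               # initial step size
errors = []
for t in range(T):
    # Step decay schedule: halve the step size on a geometric epoch grid.
    if t > 0 and t % 500 == 0:
        eta *= 0.5
    # Stochastic gradient of f_t at x.
    g = (x - b) + sigma_noise * rng.normal(size=d)
    # Proximal step: x <- prox_{eta * lam * ||.||_1}(x - eta * g).
    x = soft_threshold(x - eta * g, eta * lam)
    # The distribution drifts: the minimizer moves slightly between steps.
    b += delta_drift * rng.normal(size=d)
    errors.append(float(np.linalg.norm(x - b)))

print(f"tracking error, first 100 steps: {np.mean(errors[:100]):.3f}")
print(f"tracking error, last 100 steps:  {np.mean(errors[-100:]):.3f}")
```

With `delta_drift` small relative to `sigma_noise`, this toy problem sits in the low drift-to-noise regime highlighted in the abstract, where decaying the step size lowers the noise-driven error floor while the slowly moving minimizer remains trackable.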
