Minimax Optimal Online Stochastic Learning for Sequences of Convex Functions under Sub-Gradient Observation Failures

From MaRDI portal

Publication: 6317496

arXiv: 1904.09369
MaRDI QID: Q6317496

Author name not available

Publication date: 19 April 2019

Abstract: We study online convex optimization under stochastic sub-gradient observation faults, where we introduce adaptive algorithms with minimax optimal regret guarantees. We specifically study scenarios where our sub-gradient observations can be noisy or even completely missing in a stochastic manner. To this end, we propose algorithms based on the sub-gradient descent method, which achieve tight minimax optimal regret bounds. When necessary, these algorithms utilize properties of the underlying stochastic settings to optimize their learning rates (step sizes). These optimizations are the main factor in providing the minimax optimal performance guarantees, especially when observations are stochastically missing. However, in real-world scenarios, these properties of the underlying stochastic settings may not be revealed to the optimizer. For such a scenario, we propose a blind algorithm that estimates these properties empirically in a generally applicable manner. Through extensive experiments, we show that this empirical approach is a natural combination of regular stochastic gradient descent and the minimax optimal algorithms (which work best for randomized and adversarial function sequences, respectively).
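
The abstract describes sub-gradient descent whose learning rate is tuned using properties of the stochastic observation process, plus a "blind" variant that estimates those properties empirically when they are not revealed to the optimizer. Below is a minimal Python sketch of that idea, assuming a missing sub-gradient is reported as None and using an illustrative step-size schedule scaled by the empirically estimated observation probability; the function name, schedule, and parameters are hypothetical stand-ins and not the paper's actual algorithm.

```python
import numpy as np

def blind_online_subgradient_descent(losses, subgrads, T, dim,
                                     diameter=1.0, grad_bound=1.0):
    """Projected online sub-gradient descent with stochastically missing
    sub-gradients.  The observation probability is estimated on the fly
    and folded into the step size (the 'blind' idea from the abstract);
    the exact schedule below is illustrative, not the paper's."""
    x = np.zeros(dim)            # iterate, kept in the L2 ball of radius `diameter`
    observed = 0                 # rounds with an observed sub-gradient so far
    cumulative_loss = 0.0
    for t in range(1, T + 1):
        cumulative_loss += losses[t - 1](x)
        g = subgrads[t - 1](x)   # convention: None means the observation failed
        if g is not None:
            observed += 1
            p_hat = observed / t                      # empirical observation probability
            eta = diameter / (grad_bound * np.sqrt(max(p_hat, 1e-12) * t))
            x = x - eta * np.asarray(g)
            norm = np.linalg.norm(x)                  # project back onto the feasible ball
            if norm > diameter:
                x *= diameter / norm
    return x, cumulative_loss

# Usage on synthetic data: quadratic losses f_t(x) = ||x - c_t||^2 whose
# sub-gradients are dropped independently with probability 0.3.
rng = np.random.default_rng(0)
centers = 0.1 * rng.normal(size=(1000, 5))
losses = [lambda x, c=c: float(np.sum((x - c) ** 2)) for c in centers]
subgrads = [lambda x, c=c: 2 * (x - c) if rng.random() < 0.7 else None for c in centers]
x_final, cum_loss = blind_online_subgradient_descent(losses, subgrads, T=1000, dim=5)
```

When the observation probability is known in advance, the same step-size expression can use that known value instead of the running estimate, which is the knowledge-assisted setting the abstract contrasts with the blind one.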

This page was built for publication: Minimax Optimal Online Stochastic Learning for Sequences of Convex Functions under Sub-Gradient Observation Failures
