Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite-Sum Structure
arXiv: 1610.00970
MaRDI QID: Q6278270
Author name not available
Publication date: 4 October 2016
Abstract: Stochastic optimization algorithms with variance reduction have proven successful for minimizing large finite sums of functions. Unfortunately, these techniques are unable to deal with stochastic perturbations of input data, induced for example by data augmentation. In such cases, the objective is no longer a finite sum, and the main candidate for optimization is the stochastic gradient descent method (SGD). In this paper, we introduce a variance reduction approach for these settings when the objective is composite and strongly convex. The convergence rate outperforms that of SGD, with a constant factor that is typically much smaller and depends only on the variance of the gradient estimates induced by the perturbations on a single example.
Has companion code repository: https://github.com/albietz/stochs
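The abstract describes the approach only at a high level; the companion repository linked above contains the authors' implementation. As a rough illustration of the idea, the following is a minimal Python sketch, not the paper's exact algorithm, of a SAGA/MISO-style variance-reduced update in which every access to an example draws a fresh random perturbation (data augmentation), so that only the perturbation noise on a single example remains in the gradient variance. The squared loss, Gaussian input noise, L2 regularizer, and the names vr_sgd and perturbed_grad are illustrative assumptions, not taken from the paper.

    # Hypothetical sketch of a variance-reduced method for a finite sum of
    # n losses where each access to example i applies a fresh random
    # perturbation (data augmentation). One gradient estimate is stored per
    # example, as in SAGA/MISO-style methods, so only the perturbation noise
    # on a single example remains in the variance of the update direction.
    import numpy as np

    def perturbed_grad(w, x, y, rng, noise=0.1):
        """Gradient of a squared loss on a randomly augmented copy of (x, y)."""
        x_aug = x + noise * rng.standard_normal(x.shape)  # data augmentation
        return (x_aug @ w - y) * x_aug

    def prox_l2(w, step, lam):
        """Proximal operator of the strongly convex regularizer (lam/2)||w||^2."""
        return w / (1.0 + step * lam)

    def vr_sgd(X, y, lam=0.1, step=0.05, epochs=30, seed=0):
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        table = np.zeros((n, d))      # stored gradient estimate per example
        avg = table.mean(axis=0)      # running average of the table
        for _ in range(epochs * n):
            i = rng.integers(n)
            g = perturbed_grad(w, X[i], y[i], rng)
            # variance-reduced direction: new noisy gradient, minus the stored
            # estimate for example i, plus the average of all stored estimates
            direction = g - table[i] + avg
            w = prox_l2(w - step * direction, step, lam)
            avg += (g - table[i]) / n  # keep the average consistent
            table[i] = g
        return w

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.standard_normal((200, 10))
        w_true = rng.standard_normal(10)
        y = X @ w_true
        w = vr_sgd(X, y)
        print("residual:", np.linalg.norm(X @ w - y) / np.linalg.norm(y))

Note that with persistent perturbations a constant step size only reaches a noise-dominated neighborhood of the optimum; exact convergence requires a decreasing step-size schedule.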