Large-Scale Methods for Distributionally Robust Optimization
Publication: 6351108
arXiv: 2010.05893
MaRDI QID: Q6351108
Author name not available
Publication date: 12 October 2020
Abstract: We propose and analyze algorithms for distributionally robust optimization of convex losses with conditional value at risk (CVaR) and χ²-divergence uncertainty sets. We prove that our algorithms require a number of gradient evaluations independent of training set size and number of parameters, making them suitable for large-scale applications. For χ²-divergence uncertainty sets these are the first such guarantees in the literature, and for CVaR our guarantees scale linearly in the uncertainty level rather than quadratically as in previous work. We also provide lower bounds proving the worst-case optimality of our algorithms for CVaR and a penalized version of the problem. Our primary technical contributions are novel bounds on the bias of batch robust risk estimation and the variance of a multilevel Monte Carlo gradient estimator due to [Blanchet & Glynn, 2015]. Experiments on MNIST and ImageNet confirm the theoretical scaling of our algorithms, which are 9–36 times more efficient than full-batch methods.
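For orientation, the robust objective the abstract refers to can be written as follows (standard notation; the paper's exact normalizations may differ):

  \min_{x \in \mathcal{X}} \; \sup_{q \in \mathcal{U}} \; \sum_{i=1}^{n} q_i \, \ell_i(x),
  \qquad
  \mathcal{U}_{\mathrm{CVaR}}(\alpha) = \Big\{ q \in \Delta^n : q_i \le \tfrac{1}{\alpha n} \Big\},
  \qquad
  \mathcal{U}_{\chi^2}(\rho) = \Big\{ q \in \Delta^n : \tfrac{1}{n} \sum_{i=1}^{n} (n q_i - 1)^2 \le \rho \Big\},

where \ell_i is the convex loss on training example i, \Delta^n is the probability simplex, \alpha is the CVaR uncertainty level, and \rho is the radius of the \chi^2-divergence ball around the uniform weights.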
Has companion code repository: https://github.com/daniellevy/fast-dro
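The repository above holds the authors' implementation. Purely as a minimal, self-contained sketch of the multilevel Monte Carlo idea attributed to Blanchet & Glynn, the snippet below applies the randomized telescoping trick to estimating the CVaR of a loss distribution (the paper itself applies it to gradients of the robust loss). The batch sizes n0 * 2^j, the geometric level distribution, and the truncation at j_max are illustrative assumptions, not the paper's exact estimator.

  import numpy as np

  rng = np.random.default_rng(0)

  def batch_cvar(losses, alpha=0.1):
      """Empirical CVaR_alpha of a batch: mean of the top alpha fraction."""
      k = max(1, int(np.ceil(alpha * len(losses))))
      return np.sort(losses)[-k:].mean()

  def mlmc_estimate(draw_losses, alpha=0.1, n0=4, j_max=12):
      """One multilevel Monte Carlo estimate of the population CVaR.

      Randomized telescoping in the spirit of Blanchet & Glynn: pick a
      level J with P(J = j) proportional to 2^{-j}, form the correction
      between the level-J batch estimate and the average of its two
      half-batch estimates, and reweight by the inverse probability so
      the average over draws matches the level-j_max (largest-batch)
      estimate while a typical draw touches only a few samples.
      """
      probs = 2.0 ** -np.arange(1, j_max + 1)
      probs /= probs.sum()
      j = rng.choice(j_max, p=probs) + 1        # level J in {1, ..., j_max}
      n = n0 * 2 ** j                            # batch size at level J
      losses = draw_losses(n)
      half1, half2 = losses[: n // 2], losses[n // 2:]
      delta = batch_cvar(losses, alpha) - 0.5 * (
          batch_cvar(half1, alpha) + batch_cvar(half2, alpha)
      )
      base = batch_cvar(draw_losses(n0), alpha)  # coarsest-level estimate
      return base + delta / probs[j - 1]

  # Toy check: the average of many cheap MLMC draws should roughly
  # match one expensive large-batch CVaR estimate.
  draw = lambda n: rng.normal(size=n)  # stand-in loss distribution
  mlmc = np.mean([mlmc_estimate(draw) for _ in range(20000)])
  big = batch_cvar(draw(1_000_000))
  print(f"MLMC average: {mlmc:.3f}   large-batch CVaR: {big:.3f}")

Each draw costs O(1) samples in expectation even though the estimator debiases toward the largest batch size, which mirrors the mechanism behind the training-set-size-independent gradient-evaluation bounds claimed in the abstract.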