FedNest: Federated Bilevel, Minimax, and Compositional Optimization
Publication: 6398270
arXiv: 2205.02215
MaRDI QID: Q6398270
Author name not available
Publication date: 4 May 2022
Abstract: Standard federated optimization methods successfully apply to stochastic problems with single-level structure. However, many contemporary ML problems -- including adversarial robustness, hyperparameter tuning, and actor-critic -- fall under nested bilevel programming that subsumes minimax and compositional optimization. In this work, we propose FedNest: a federated alternating stochastic gradient method to address general nested problems. We establish provable convergence rates for FedNest in the presence of heterogeneous data and introduce variations for bilevel, minimax, and compositional optimization. FedNest introduces multiple innovations including federated hypergradient computation and variance reduction to address inner-level heterogeneity. We complement our theory with experiments on hyperparameter & hyper-representation learning and minimax optimization that demonstrate the benefits of our method in practice. Code is available at https://github.com/ucr-optml/FedNest.
Has companion code repository: https://github.com/Tarzanagh/FedNest