Optimal data splitting in distributed optimization for machine learning
Publication:6124406
DOI: 10.1134/S1064562423701600
arXiv: 2401.07809
OpenAlex: W4393142227
MaRDI QID: Q6124406
No author found.
Publication date: 27 March 2024
Published in: Doklady Mathematics
Full work available at URL: https://arxiv.org/abs/2401.07809
Cites Work
- Unnamed Item
- Lectures on convex optimization
- Newton's method and its use in optimization
- Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions
- Advances and Open Problems in Federated Learning
- Distributed Optimization Based on Gradient Tracking Revisited: Enhancing Convergence Rate via Surrogation