Federated learning for minimizing nonsmooth convex loss functions
From MaRDI portal
Publication: 6112869
DOI: 10.3934/mfc.2023026 · zbMath: 1527.68200 · OpenAlex: W4380685566 · MaRDI QID: Q6112869
No author found.
Publication date: 7 August 2023
Published in: Mathematical Foundations of Computing
Full work available at URL: https://doi.org/10.3934/mfc.2023026
MSC classification: Convex programming (90C25) · Learning and adaptive systems in artificial intelligence (68T05) · Stochastic programming (90C15) · Privacy of data (68P27)
Cites Work
- Unregularized online learning algorithms with general loss functions
- Online gradient descent learning algorithms
- Modified Fejér sequences and applications
- Convergence of online mirror descent
- Random gradient-free minimization of convex functions
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Online learning with Markov sampling
- On Stochastic Subgradient Mirror-Descent Algorithm with Weighted Averaging
- Convergence analysis of distributed multi-penalty regularized pairwise learning
- Gradient‐free method for distributed multi‐agent optimization via push‐sum algorithms
- Analysis of Online Composite Mirror Descent Algorithm
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Distributed Mirror Descent for Online Composite Optimization