Distributed adaptive online learning for convex optimization with weight decay
From MaRDI portal
Publication: 6578717
DOI: 10.1002/asjc.2489
MaRDI QID: Q6578717
Dequan Li, Runyue Fang, Xiongjun Wu, Xiuyu Shen, Yuejin Zhou
Publication date: 25 July 2024
Published in: Asian Journal of Control
Cites Work
- Distributed multi-task classification: a decentralized online learning approach
- Mirror descent and nonlinear projected subgradient methods for convex optimization
- On Convergence Rate of Distributed Stochastic Gradient Algorithm for Convex Optimization with Inequality Constraints
- Finite-Time Connectivity-Preserving Consensus of Networked Nonlinear Agents With Unknown Lipschitz Terms
- Fast Distributed Gradient Methods
- Online Learning and Online Convex Optimization
- Distributed Online Convex Optimization on Time-Varying Directed Graphs
- Distributed Online Optimization in Dynamic Environments Using Mirror Descent
- Decentralized Online Learning With Kernels
- Fastest Mixing Markov Chain on a Graph
- Distributed Subgradient Methods for Multi-Agent Optimization
- Centralized and decentralized distributed control of longitudinal vehicular platoons with non‐uniform communication topology
- Collective dynamics of ‘small-world’ networks
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Distributed Average Tracking of Multiple Time-Varying Reference Signals With Bounded Derivatives
Related Items (1)