Strong consistency of random gradient‐free algorithms for distributed optimization
From MaRDI portal
Publication: 5346596
DOI: 10.1002/oca.2254  zbMath: 1362.93172  OpenAlex: W2344594310  MaRDI QID: Q5346596
Publication date: 26 May 2017
Published in: Optimal Control Applications and Methods
Full work available at URL: https://doi.org/10.1002/oca.2254
Keywords: convergence analysis; multi-agent systems; distributed optimization; Gaussian smoothing; random gradient-free method
Related Items
- Asymptotic properties of dual averaging algorithm for constrained distributed stochastic optimization
- Gradient-free distributed optimization with exact convergence
- A gradient-free distributed optimization method for convex sum of nonconvex cost functions
- A distributed accelerated optimization algorithm over time-varying directed graphs with uncoordinated step-sizes
- A resilient distributed optimization strategy against false data injection attacks
- A fixed step distributed proximal gradient push-pull algorithm based on integral quadratic constraint
Cites Work
- Distributed stochastic subgradient projection algorithms for convex optimization
- Randomized optimal consensus of multi-agent systems
- Gradient-free method for nonsmooth distributed optimization
- Incremental Subgradient Methods for Nondifferentiable Optimization
- Incremental Stochastic Subgradient Algorithms for Convex Optimization
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- Distributed Subgradient Methods for Multi-Agent Optimization
- Constrained Consensus and Optimization in Multi-Agent Networks
- Gradient-free method for distributed multi-agent optimization via push-sum algorithms
- Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization