Distributed Subgradient-Free Stochastic Optimization Algorithm for Nonsmooth Convex Functions over Time-Varying Networks
From MaRDI portal
Publication:5232244
DOI: 10.1137/18M119046X
zbMath: 1421.93149
arXiv: 1806.08537
MaRDI QID: Q5232244
Mohsen Zamani, Yiguang Hong, Yinghui Wang, Wen-Xiao Zhao
Publication date: 30 August 2019
Published in: SIAM Journal on Control and Optimization
Full work available at URL: https://arxiv.org/abs/1806.08537
Keywords: nonsmoothness; distributed stochastic optimization; randomized differences; gradient/subgradient-free algorithm
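The keywords above point to the paper's central technique: replacing subgradients with a two-point randomized-difference estimate built from function values alone. As a minimal illustrative sketch (not the paper's exact algorithm — the step sizes, perturbation distribution, and function name here are assumptions), such an estimator can be written as:

```python
import numpy as np

def randomized_difference_grad(f, x, delta=1e-3, rng=None):
    """Two-point randomized-difference gradient estimate (zeroth-order).

    Uses only evaluations of f -- no gradients or subgradients.
    `delta` is a hypothetical smoothing radius; `u` is a random
    Gaussian perturbation direction.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)  # random perturbation direction
    # Finite difference along u, scaled back onto u
    return (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta) * u

# Usage sketch: minimize the nonsmooth convex f(x) = ||x||_1
f = lambda x: np.abs(x).sum()
x = np.array([1.0, -2.0])
for k in range(2000):
    g = randomized_difference_grad(f, x, rng=np.random.default_rng(k))
    x = x - 0.01 * g  # plain (non-distributed) zeroth-order step
```

In the distributed setting studied by the paper, each agent would apply such an estimator to its local objective and combine the step with consensus averaging over the time-varying network; the single-agent loop above only illustrates the subgradient-free ingredient.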
Related Items (5)
- Zeroth-order algorithms for stochastic distributed nonconvex optimization
- Subgradient averaging for multi-agent optimisation with different constraint sets
- Distributed solving linear algebraic equations with switched fractional order dynamics
- Distributed online bandit optimization under random quantization
- Fast Decentralized Nonconvex Finite-Sum Optimization with Recursive Variance Reduction
Cites Work
- Primal-dual algorithm for distributed constrained optimization
- Initialization-free distributed algorithms for optimal resource allocation with feasibility constraints and application to economic dispatch of power systems
- Distributed stochastic subgradient projection algorithms for convex optimization
- Distributed average consensus with least-mean-square deviation
- On the information-adaptive variants of the ADMM: an iteration complexity perspective
- Stochastic approximation and its applications
- Random gradient-free minimization of convex functions
- Lexicographic differentiation of nonsmooth functions
- On Convergence Rate of Distributed Stochastic Gradient Algorithm for Convex Optimization with Inequality Constraints
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Fast Distributed Gradient Methods
- Deep Learning: Methods and Applications
- Introduction to Derivative-Free Optimization
- Random-seeking methods for the stochastic unconstrained optimization
- A Kiefer-Wolfowitz algorithm with randomized differences
- Convergence Rate of Distributed ADMM Over Networks
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
- Distributed Subgradient Methods for Multi-Agent Optimization
- Constrained Consensus and Optimization in Multi-Agent Networks
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- A Simplex Method for Function Minimization
- Stochastic Estimation of the Maximum of a Regression Function
- Probability