Zeroth-order algorithms for stochastic distributed nonconvex optimization
Publication: 2151863
DOI: 10.1016/j.automatica.2022.110353
zbMath: 1494.93146
arXiv: 2106.02958
OpenAlex: W3171374256
MaRDI QID: Q2151863
Shengjun Zhang, Karl Henrik Johansson, Xinlei Yi, Tao Yang
Publication date: 5 July 2022
Published in: Automatica
Full work available at URL: https://arxiv.org/abs/2106.02958
Keywords: stochastic optimization; linear speedup; gradient-free; Polyak-Łojasiewicz condition; distributed nonconvex optimization
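The keywords above revolve around gradient-free (zeroth-order) optimization, in which the gradient is never computed directly but is instead estimated from function evaluations alone. As a minimal illustration of this idea, and not the algorithm proposed in the paper, the sketch below implements a standard two-point zeroth-order gradient estimator along a random Gaussian direction and uses it inside plain gradient descent; the function names `two_point_grad_estimate` and `zo_gradient_descent` are hypothetical.

```python
import numpy as np

def two_point_grad_estimate(f, x, mu=1e-5, rng=None):
    """Two-point zeroth-order gradient estimate of f at x.

    Samples a random Gaussian direction u and returns the
    finite-difference estimate ((f(x + mu*u) - f(x)) / mu) * u,
    which is an unbiased estimate of the gradient of a smoothed
    version of f. Illustrative only; not the paper's estimator.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def zo_gradient_descent(f, x0, step=0.1, iters=500, mu=1e-5, seed=0):
    """Minimize f by gradient descent driven only by function values."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = two_point_grad_estimate(f, x, mu=mu, rng=rng)
        x = x - step * g
    return x

# Smooth test problem: f(x) = ||x - 1||^2, minimized at the all-ones vector.
f = lambda x: np.sum((x - 1.0) ** 2)
x_star = zo_gradient_descent(f, x0=np.zeros(3))
```

The same oracle idea underlies several of the cited works (e.g. "Random gradient-free minimization of convex functions" and the two-point feedback papers); the distributed setting of the present paper additionally couples such local estimates across agents over a network.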
Cites Work
- On the information-adaptive variants of the ADMM: an iteration complexity perspective
- A stochastic subspace approach to gradient-free optimization in high dimensions
- A new one-point residual-feedback oracle for black-box learning and control
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- Random gradient-free minimization of convex functions
- Random optimization
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Self-Correcting Geometry in Model-Based Algorithms for Derivative-Free Unconstrained Optimization
- Introduction to Derivative-Free Optimization
- "Direct Search" Solution of Numerical and Statistical Problems
- Derivative-Free and Blackbox Optimization
- Harnessing Smoothness to Accelerate Distributed Optimization
- Stochastic Three Points Method for Unconstrained Smooth Minimization
- Distributed Subgradient Methods for Multi-Agent Optimization
- Distributed Randomized Gradient-Free Mirror Descent Algorithm for Constrained Optimization
- Accelerated Distributed Nesterov Gradient Descent
- Global Convergence of General Derivative-Free Trust-Region Algorithms to First- and Second-Order Critical Points
- ZONE: Zeroth-Order Nonconvex Multiagent Optimization Over Networks
- Randomized Gradient-Free Distributed Optimization Methods for a Multiagent System With Unknown Cost Function
- Derivative-free optimization methods
- Distributed Subgradient-Free Stochastic Optimization Algorithm for Nonsmooth Convex Functions over Time-Varying Networks
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Gradient‐free method for distributed multi‐agent optimization via push‐sum algorithms
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- A Simplex Method for Function Minimization
- Distributed Zero-Order Algorithms for Nonconvex Multiagent Optimization
- Wedge trust region method for derivative-free optimization
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization