Incremental gradient-free method for nonsmooth distributed optimization
Publication: 2411165
DOI: 10.3934/jimo.2017021
OpenAlex: W2560284529
MaRDI QID: Q2411165
Xiangyu Wang, Guoquan Li, Changzhi Wu, Kwang-Hyo Jung, Jae-Myung Lee, Jueyou Li, Zhi-You Wu
Publication date: 20 October 2017
Published in: Journal of Industrial and Management Optimization
Full work available at URL: https://doi.org/10.3934/jimo.2017021
Nonsmooth analysis (49J52)
Applications of operator theory in optimization, convex analysis, mathematical programming, economics (47N10)
Cites Work
- A fast dual proximal-gradient method for separable convex optimization with linear coupled constraints
- On stochastic gradient and subgradient methods with adaptive steplength sequences
- A new exact penalty function method for continuous inequality constrained optimization problems
- Incremental proximal methods for large scale convex optimization
- Robust identification
- Incremental gradient algorithms with stepsizes bounded away from zero
- Gradient-free method for nonsmooth distributed optimization
- A derivative-free method for solving large-scale nonlinear systems of equations
- Random gradient-free minimization of convex functions
- A hybrid method combining genetic algorithm and Hooke-Jeeves method for constrained global optimization
- Maximum flow problem in the distribution network
- A derivative-free method for linearly constrained nonsmooth optimization
- A smoothing scheme for optimization problems with max-min constraints
- Stochastic optimization problems with nondifferentiable cost functionals
- Incremental Subgradient Methods for Nondifferentiable Optimization
- Randomized Smoothing for Stochastic Optimization
- Incremental Stochastic Subgradient Algorithms for Convex Optimization
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- Introduction to Derivative-Free Optimization
- Normalized Incremental Subgradient Algorithm and Its Application
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- Distributed Subgradient Methods for Multi-Agent Optimization
- Distributed proximal-gradient method for convex optimization with inequality constraints
- Gradient‐free method for distributed multi‐agent optimization via push‐sum algorithms
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling